
I am glad to be part of this project. Our report was authored by Anmol Alphonso, Sérgio Barbosa, Cayley Clifford, Kiran Garimella, Elonnai Hickok, Martin Riedl, Erkan Saka, Herman Wasserman, and Sahana Udupa.
The full policy report is available here. The book, WhatsApp in the World: Disinformation, Encryption and Extreme Speech (New York University Press, 2025), can be accessed here.
The research group on encrypted messaging and extreme speech (2024-2025), which developed the policy report, was supported by the Center for Advanced Studies, LMU Munich. Sahana Udupa is the corresponding author.
Summary of “Policy Directions on Encrypted Messaging and Extreme Speech” (TechPolicy.Press, Aug 22, 2025):
The article analyzes the role of encrypted messaging platforms, especially WhatsApp, in the spread of extreme speech—content that blurs the boundaries between truth and falsity, and between civility and incivility. While these platforms are vital for free expression, civic mobilization, and journalism, they are also frequently exploited to entrench social hierarchies, propagate disinformation, and legitimize conspiracy theories.
Policy and Regulatory Trends:
Recent regulatory efforts in the EU (the ProtectEU initiative), the UK (expansion of the Investigatory Powers Act and the Online Safety Act), and countries such as Brazil, Uganda, and Zambia seek to break or limit encryption for law enforcement purposes, but face strong resistance from digital rights advocates, technology companies, and civil society. These measures risk undermining fundamental cybersecurity and human rights principles.
Tech companies, such as Apple, have withdrawn privacy features from some markets rather than weaken encryption, and Meta has shifted to more “crowdsourced” moderation, reducing platform oversight.
Key Challenges Identified:
Contradictions: While encryption promises privacy, governments can bypass it through coercion or physical access to devices.
Social Dynamics: In regions like Turkey, WhatsApp is embedded in trusted family and community networks, making it fertile ground for “deep extreme speech” driven by microtargeted messaging and local influencers.
Fact-Checking Difficulties: Encryption makes external moderation challenging; users rarely report problematic content, and emotional or network-driven falsehoods often persist despite debunking.
AI and Disinformation: Advances in generative AI are democratizing content creation, but they are also enabling sophisticated disinformation at scale.
Recommendations:
Policy must move away from undermining encryption, focusing instead on supporting human rights-based moderation, strengthening user reporting, and enhancing metadata analysis rather than content inspection.
Governments should create legal frameworks for researcher data access and support fact-checkers with dedicated channels on messaging platforms.
Collaborative, risk-assessed, and context-sensitive approaches involving stakeholders (including civil society and the technical community) are urged over indiscriminate surveillance or technical backdoors.

Conclusion:
Removing or weakening encryption is not the answer to extreme speech or disinformation. Instead, the authors advocate for robust, context-sensitive platform governance and content moderation, international legal standards, and responsible AI integration to protect both user safety and democratic values. The article comes from a research collective supported by LMU Munich and references their full policy report and recent academic work on the subject.
