I was quoted in this piece: “Hey Grok bu doğru mu?” (“Hey Grok, is this true?”)
Here is the full interview translated into English:
1 – As of February 2026, during escalating conflicts between the US, Israel, and Iran, we witnessed social media overflowing with countless pieces of content produced or manipulated by artificial intelligence. How do the ‘echo chambers’ of the new media ecosystem play a role in the masses believing and spreading narratives that align with their own political or emotional inclinations in such a short time, without questioning their accuracy? How should we interpret the destructive impact of algorithms in this process?
In fact, what becomes visible in the crisis is a structural environment that has been built up over a long period of time. Echo chambers work on two levels: first, they reinforce the political and emotional frameworks that people already believe in; second, the platform algorithms that maximize engagement reward this emotional intensity.
Thus:
- Users are exposed to narratives that align with their identity and affiliations; instead of doubting them, they share with a feeling of “I already knew that.”
- Algorithms make this rapidly spreading content more visible, which reinforces the illusion of “the majority’s truth.”
- Especially in times of war, the “us versus them” distinction transforms echo chambers into a kind of emotional trench; users both defend their identity and, in doing so, reproduce misinformation.
Therefore, algorithms must be understood not merely as “neutral technical tools,” but as part of a political economy that operates through emotional intensity and polarization.
However, let me emphasize: Numerous studies show that people consume news based on their existing political and emotional frameworks. While discussing the impact of algorithms, we must also look at ourselves.

2 – In closed-loop systems, as fake content is constantly repeated, algorithms feed off this cycle of anger and engagement. Yesterday, even Grok, the AI on the X platform, while analyzing these intense reactions, mistook a simulation game for a real physical attack and generated a news headline stating, ‘Iran Strikes Tel Aviv with Heavy Missiles,’ which was the most striking example of this. Can we treat even the most advanced large language models (LLMs) being poisoned and misled by the pollution on their own platforms as a structural weakness of the media?
The fact that Grok on X read an event in a simulation game as a real missile attack and generated a headline like “Iran struck Tel Aviv with heavy missiles” actually reveals a three-tiered vulnerability:
- Platform level: Platforms like X reduce human moderation and verification capacity, placing more of the burden on automation and artificial intelligence; this leads to the system feeding its own generated information pollution back to itself as data.
- Model level: LLMs are prone to mistaking the “noise” on their own platforms for truth, because their training emphasizes probability and pattern recognition, and truth is not defined as an ontological category.
- Ecosystem level: Users perceive these tools as “impartial arbiters” and use them for verification in times of crisis; however, as we saw in the Grok example, the model can reproduce common misconceptions as “truth” just like a user.
Therefore, it is more accurate to view this not merely as a “technical error,” but as a structural weakness in adding artificial intelligence as a regulator to an already problematic platform ecosystem.
3 – In those first chaotic minutes when crises erupt, a dangerous ‘information vacuum’ emerges while traditional journalism takes time to gather data. We know that in this vacuum, individuals under high anxiety (cognitive load) abandon analytical thinking and act entirely on emotional reflexes. Is this evolutionary amygdala reaction of the human brain the source that disinformation-producing technologies feed on most?
People suspend complex analysis and cling to the narrative that seems “fastest and most explanatory”; conspiracy theories and emotional narratives therefore become very appealing.
Disinformation producers target precisely this neuropsychological vulnerability; doctored videos, AI-generated images, and exaggerated headlines are designed as content that triggers the brain’s threat perception.
So, the amygdala reaction is not the sole cause, but we can say it is the weakest link that disinformation technologies exploit most easily; technical manipulation and emotional architecture feed into each other.
4 – We see that forensic computing tools or detection software alone are not sufficient in the fight against disinformation. As a communication scholar, how do you see the potential in Turkey for ‘prebunking’ initiatives that aim to educate the public before they are exposed to manipulation tactics? Could this psychological vaccine be a way out for the masses?
We have seen that relying solely on debunking is limited in Turkey; from forest fires to migration debates, and now to content about the Iran-US/Israel war, misinformation can take root very quickly. Prebunking, i.e., “vaccinating” the masses against manipulation tactics before they encounter them, has serious potential in countries like Turkey that face a high risk of disinformation.
As a communications academic, I consider three points particularly important:
- Making prebunking part of digital literacy at the curriculum level, not just as a “campaign”; teaching the skill of “recognizing manipulation patterns” from elementary school to university.
- Establishing multi-stakeholder initiatives that extend this approach not only to state institutions but also to local communities, NGOs, and journalists; otherwise, a trust issue arises.
- Positioning prebunking as an immunity mechanism rather than propaganda, with projects designed to be impartial and transparent, taking into account the polarization in Turkey.
When designed correctly, prebunking can truly function as a “psychological vaccine,” but for this vaccine to be trusted, the implementing institutions must also be accountable and pluralistic.
5 – It is stated that in the post-truth era, any unverified piece of information circulating on the internet could be a fabrication targeting social peace. How can individuals move beyond being passive content consumers and actively engage their ‘digital skepticism muscle’ in the three seconds before hitting the share button, and how can this be transformed into a sociological behavioral pattern?
In the post-truth era, we must accept that any unverified information could potentially be a product of social engineering; this risk increases exponentially, especially in times of war and crisis. Therefore, it is critical for individuals to turn those three seconds before pressing the “share” button into a micro-ethical practice.
To turn this “digital skepticism muscle” into a sociological behavior pattern, I can say the following as a continuation of my answer to the previous question: media literacy programs should aim to have people internalize this brief pause as a daily habit rather than merely providing them with information. Skepticism transformed into behavior through small rituals can become a cultural norm over time. This way, we can move toward norm construction: just as we wear a seatbelt in traffic, “not sharing immediately upon seeing” and questioning the source first should become a socially accepted norm.
Meanwhile, there are things that need to be done beyond the individual level. Pressure must be put on platforms: through platform design and policy regulation, users can be supported with interfaces and warnings that remind them of these three seconds, and especially in times of crisis, “slowing down” mechanisms can be put into effect.
