When AI Becomes a Weapon: Synthetic Media, Misinformation, and the Iran Protests


In moments of social unrest, information becomes as powerful as any physical force. During the recent protests in Iran, digital platforms have been flooded with images, videos, and audio clips that appear authentic—but are not. Advances in generative AI have made it easier than ever to fabricate convincing content at scale, turning misinformation into a powerful tool for manipulation, intimidation, and psychological harm.

This is not just a political issue. It is a digital safety issue—and one that directly aligns with Kids Shield’s mission to protect communities, especially young people, from online harm.

 

AI-Generated Content as a Tool of Manipulation

AI-generated images, videos, and voice recordings—often referred to as synthetic media or deepfakes—have been increasingly used to distort reality during periods of unrest. In the context of the Iran protests, these technologies have been leveraged to:

  • Fabricate scenes of violence or chaos
  • Impersonate activists, journalists, or protest leaders
  • Circulate false confessions or staged apologies
  • Spread fear and confusion to discourage participation

Anti-regime social media users are sharing a video supposedly showing women protesters smashing a vehicle belonging to the Basij, an Iranian paramilitary force that has been deployed to suppress the protests.

Iranian woman destroying car

After analysis, this video was determined to be AI-generated. While at first glance it appears to show protesters taking action, it also feeds a pro-regime narrative that protesters are violent and destructive.

Much of this content is designed to look emotionally charged and urgent, increasing the likelihood that it will be shared before being verified.

Another instance shows an AI-generated video of “millions” of Iranian citizens rallying in support of the Iranian regime. While it has been reported that tens of thousands of pro-regime demonstrators did appear in Tehran on January 12th, 2026, no credible sources report a turnout anywhere close to a million.

millions protest

Content like this is intended to confuse and demoralize anti-regime protesters. Thankfully, the video, posted on X, has since been labelled AI-generated because “the crowd shows unnatural grid-like spacing, repeated patterns, and minimal individual movements”.

 

The Role of Coordinated Cyber Campaigns

Synthetic media rarely acts alone. It is often amplified through coordinated networks of bots, fake accounts, and automated engagement. Together, these tactics create the illusion of consensus or widespread belief, making it harder for individuals to distinguish truth from manipulation.

Pro regime gen AI

High engagement on posts like this one, which upon closer inspection is clearly AI-generated, demonstrates how these tactics can spread a narrative of widespread government support even when it is not true.

This kind of digital interference erodes trust—not only in news sources, but also in personal networks. When people no longer know what to believe, silence and disengagement often follow.

 

Why This Matters for Children and Youth

Young people are especially vulnerable to AI-driven misinformation. They are frequent users of social media, highly exposed to visual content, and still developing critical digital literacy skills. Exposure to manipulated content can lead to:

  • Heightened anxiety and fear
  • Confusion about real-world events
  • Normalization of online harassment and intimidation
  • Reduced trust in information and institutions

Protecting youth from these harms requires more than content moderation—it requires education, awareness, and resilience.

 

A Global Issue, Not a Local One

While Iran provides a clear and urgent case study, the misuse of AI-generated content during protests is a global phenomenon. Similar tactics have appeared in elections, conflicts, and social movements worldwide. What we are seeing is a broader shift: AI is increasingly being used not just to create, but to manipulate narratives.

The global perception of what is happening in Iran can be distorted, resulting in global powers failing to take appropriate action at the right time to help the unarmed people of Iran. Currently, 93 million Iranians are being held hostage by the regime following the internet blackout and the blocking of phone lines, and over 12,000 people have already been killed. If the world is unable to discern truth from fabrication and global powers do not act, even more people will lose their lives.

Meanwhile, the government is spreading fake images of supporters in order to downplay the protests. Take a look at this video by Instagram user @hamiunicar that exposes errors in an AI-generated image posted by the IRGC: https://www.instagram.com/reels/DTgZMvVkSOU/

spotting errors in AI video

In the photo, he points out floating posters, a man with disconnected arms, people seemingly filming the cameraman from hundreds of meters away, and a random pink flag, among other things. At first glance, images like this appear real, but upon closer inspection their flaws are revealed.

This makes digital safety and AI literacy essential life skills.

 

How to Spot AI-Generated Content

While it is getting increasingly difficult to tell whether a piece of media has been AI-generated, there are still a few strategies you can use.

When it comes to news stories or politics, one method you can use is reverse image searching. An AI-generated image will likely show no results, since it has not been online before, whereas a credible image will likely have circulated widely and appear in multiple news articles. This is not a foolproof detection method, though, since a person’s real photo may never have circulated online either.
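If you’re comfortable with a little code, this kind of lookup can even be automated. Below is a minimal sketch using Google Cloud Vision’s web detection feature, which reports pages where matching copies of an image have appeared. It assumes you have the google-cloud-vision package installed and Google Cloud credentials configured; the image file name is hypothetical.

```python
# Minimal sketch: reverse image lookup via Google Cloud Vision web detection.
# Assumes the google-cloud-vision package is installed and the
# GOOGLE_APPLICATION_CREDENTIALS environment variable points to a valid key.
from google.cloud import vision

def find_matching_pages(image_path: str) -> list[str]:
    """Return URLs of web pages that contain matching copies of the image."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.web_detection(image=image)
    return [page.url for page in response.web_detection.pages_with_matching_images]

if __name__ == "__main__":
    matches = find_matching_pages("suspect_photo.jpg")  # hypothetical file name
    if matches:
        print("This image has appeared online before:")
        for url in matches:
            print(" -", url)
    else:
        print("No prior appearances found. Treat the image with caution.")
```

As noted above, an empty result does not prove an image is fake; it only means the image has not been indexed online before.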

For social media accounts, you can also check how often an account posts and how relevant the image or video is to what is usually discussed on that account. If the account is talking about news or politics, it is especially important to check who is sharing the information and verify that the source is reputable.

When in doubt, always cross-check any important information you see with trusted news sources, and make sure you can find the same information in multiple places.

Although these tells might disappear as the technology improves, there are still some you can look out for to identify AI-generated content. A fuzzy background or details that do not make sense can be a sign that something is not quite right, but these qualities can sometimes appear in real photos too, so they are not guaranteed indicators.

If you really want to take a deep dive into content inspection, a researcher named Márk Miskolczi has developed two checklist tools you can use. His research, published in November 2025, examines how AI-generated images (AIGIs) are fooling social media users.

The first tool is called ESMA (eight-step manual analysis) and is used to visually inspect content for common signs of AI generation. ESMA looks at:

  • Human anatomy inaccuracies
  • Asymmetry and facial oddities
  • Object and environment anomalies
  • Unnatural textures
  • Disregard for physics and gravity
  • Text and lettering issues
  • Exaggerated symmetry
  • Crowd and group scene flaws
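Because ESMA is literally a checklist, it is easy to turn into a small tool. Here is a minimal sketch in Python that walks a reviewer through the eight steps and tallies red flags; the yes/no scoring and the two-flag threshold are our own simplification, not part of Miskolczi’s method.

```python
# A simple interactive walkthrough of the ESMA checklist.
# The eight steps come from the study; the scoring is our own simplification.
ESMA_STEPS = [
    "Human anatomy inaccuracies (extra fingers, merged limbs)?",
    "Asymmetry or facial oddities?",
    "Object or environment anomalies?",
    "Unnatural textures?",
    "Disregard for physics and gravity?",
    "Text or lettering issues?",
    "Exaggerated symmetry?",
    "Crowd or group scene flaws (grid-like spacing, cloned people)?",
]

def run_esma() -> int:
    """Ask each ESMA question and return the number of red flags spotted."""
    flags = 0
    for step in ESMA_STEPS:
        if input(step + " [y/N] ").strip().lower() == "y":
            flags += 1
    return flags

if __name__ == "__main__":
    score = run_esma()
    print(f"{score}/8 red flags found.")
    # The threshold below is an assumption: even one strong flag warrants caution.
    if score >= 2:
        print("Treat this image as likely AI-generated until verified.")
```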

The second tool is called TSMA (ten-step manual analysis) and is used to spot bot-like commenting patterns that boost content credibility through social proof. TSMA looks at:

  • Unnatural posting frequency and speed
  • Repetitive or template-based comments
  • Lack of personalized interaction
  • Suspicious or generic profile information
  • Inconsistent or AI-generated profile pictures
  • Low social engagement (few friends or followers)
  • Simultaneous activity across multiple pages or groups
  • Identical language use across different topics
  • Unusual comment timing patterns
  • Behavior aligned with automated scripts
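A few of the TSMA signals lend themselves to automation. The sketch below scores a hypothetical account on two of them, posting frequency and repetitive comments; the Account structure and the thresholds are illustrative assumptions, and the remaining eight signals still call for manual review.

```python
# Sketch of automating two TSMA signals: unnatural posting frequency and
# repetitive, template-based comments. The Account structure and the
# thresholds are illustrative assumptions, not part of the TSMA method.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts_per_day: float     # hypothetical field
    comments: list[str]      # recent comments, hypothetical field

def tsma_red_flags(account: Account) -> list[str]:
    """Return the automated TSMA signals this account triggers."""
    flags = []
    # Signal 1: unnatural posting frequency (threshold is an assumption).
    if account.posts_per_day > 50:
        flags.append("unnatural posting frequency and speed")
    # Signal 2: repetitive or template-based comments (over half identical).
    counts = Counter(account.comments)
    if counts and counts.most_common(1)[0][1] / len(account.comments) > 0.5:
        flags.append("repetitive or template-based comments")
    return flags

if __name__ == "__main__":
    suspect = Account(  # hypothetical account data
        name="patriot_fan_84",
        posts_per_day=120.0,
        comments=["Great leader!", "Great leader!", "Great leader!", "Wow"],
    )
    print(tsma_red_flags(suspect))
```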

If you’re interested in learning more about how to use these tools, the full study, “The illusion of reality: How AI-generated images (AIGIs) are fooling social media users,” is available on ScienceDirect.

 

How Kids Shield Responds

At Kids Shield, we focus on proactive protection. Our work emphasizes:

  • Teaching young people how to recognize manipulated and synthetic content
  • Building awareness around AI misuse and digital deception
  • Promoting critical thinking and emotional resilience online
  • Developing AI-driven tools that prioritize safety, transparency, and well-being

Our level two course, Defender Shield, goes into depth on all of these AI tricks and equips kids with the skills and knowledge they need to stay safe in an increasingly AI-dominated online world. To register for our courses, check out our website here: https://kidsshield.ca/services/shields

Defender Shield


Our goal is not to take political positions, but to ensure that children, families, and communities are equipped to navigate an increasingly complex digital world.

 

Moving Forward

AI itself is not the enemy. Like any powerful technology, its impact depends on how it is used. By understanding how generative AI can be misused—especially during moments of social vulnerability—we can better protect those most at risk.

The events unfolding in Iran remind us that digital harm is real, scalable, and deeply human in its consequences. Addressing it starts with awareness, education, and responsible innovation.

Be Educated. Be Connected. Be Safe.
