AI-Generated War Fakes Fuel Chaos in Iran Conflict

A flood of artificial intelligence (AI)-generated videos and images has overwhelmed social media platforms during the recent escalation of the conflict involving Iran, creating widespread confusion and distrust. These fakes, depicting explosions that never occurred, phantom attacks on cities, and fabricated troop movements, have been viewed millions of times across X (formerly Twitter), TikTok, and Facebook.

The New York Times identified more than 110 unique pieces of AI-generated content in the past two weeks alone. The fabricated material ranges from dramatic scenes of Israelis sheltering from nonexistent airstrikes in Tel Aviv to staged mourning in Iran, and even entirely fictional attacks on U.S. naval vessels.

Why this matters: The rapid advancement of AI tools now allows almost anyone to create convincing war simulations with minimal effort and cost, making it increasingly difficult to distinguish between reality and disinformation. This isn’t just about isolated incidents; it’s a systemic vulnerability that undermines trust in information and potentially escalates conflict. The war in Ukraine demonstrated how quickly AI can be used to spread propaganda, but the current conflict shows an even faster proliferation of fake content, partly due to the multiple active fronts.

Weaponized Disinformation

The proliferation of AI-generated fakes isn’t accidental. Experts at Cyabra, a social media intelligence company, found that the majority of AI videos about the war actively promote pro-Iranian narratives. The purpose: to exaggerate the perceived devastation and cost of the conflict for the United States and its allies.

One widely circulated fake video depicts a missile strike on Tel Aviv, complete with an Israeli flag inserted to lend the fabrication an air of authenticity. AI tools often add such symbols when prompted to create realistic-looking war footage. The video was shared across multiple platforms and picked up by fringe news outlets, demonstrating how easily these fakes can gain traction.

The Iranian government appears to be deliberately leveraging these tools to shape public opinion. By falsely portraying superior military capabilities and widespread destruction, Tehran aims to undermine support for continued military action.

The Blurred Line Between Real and Fake

Genuine footage of the conflict exists, often captured by bystanders on cell phones. However, even authentic videos are sometimes enhanced with AI tools to make explosions appear larger or more dramatic, further blurring the line between reality and manipulation.

The U.S.S. Abraham Lincoln incident exemplifies this chaos. After Iran’s Islamic Revolutionary Guards Navy claimed a successful attack on the carrier, a deluge of AI-generated fakes depicting the ship ablaze flooded social media. Despite U.S. claims that the attack failed, the fabricated images fueled Iranian celebrations and reinforced false narratives.

The Failure of Platform Regulation

Social media companies have struggled to contain the spread of AI-generated war content. While some platforms, like X, have announced limited measures – such as suspending monetization for unlabeled AI depictions of armed conflict – enforcement remains weak. Many accounts spreading disinformation are not motivated by profit but by a deliberate effort to weaponize the narrative.

As Valerie Wirtschafter, a fellow at the Brookings Institution, notes, “This is a natural front for Iran to try and exploit, and it feels like this is one of the reasons it is so voluminous. It’s actually a tool of war.”

The bottom line: AI-generated disinformation is now an integral part of modern warfare, capable of manipulating public perception and potentially escalating conflicts. The current situation in Iran demonstrates how easily these tools can be weaponized, and the lack of effective regulation suggests that this problem will only worsen.