OpenAI Tightens Deepfake Controls After Celebrity Backlash

OpenAI is reinforcing safeguards in its AI video generator, Sora, to prevent unauthorized use of celebrity likenesses. The move comes after actor Bryan Cranston and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) raised concerns about deepfakes being created without consent. This reflects a growing tension between AI developers and rights holders over intellectual property in the age of generative AI.

The Problem with Sora: Uncontrolled Likeness Replication

Sora, launched three weeks ago, allows users to generate realistic videos from text prompts. Unlike most AI video platforms, it makes replicating recognizable faces and voices effortless. This has led to a surge in deepfakes – some harmless, others disturbing, and some outright malicious. The app's ability to place individuals into fabricated scenarios without permission prompted direct action from celebrities and unions.

Bryan Cranston personally alerted SAG-AFTRA when his likeness appeared in unauthorized Sora videos. The resulting agreement with OpenAI requires celebrities to explicitly opt in to having their images used, effectively reversing the previous default where likenesses were available unless excluded. OpenAI has stated that it regrets these unintentional generations and has strengthened its guardrails.

Why This Matters: A Broader Trend of AI Copyright Conflicts

The controversy with Sora highlights a critical issue: the erosion of control over personal identity in the digital age. AI models are trained on vast datasets, often including copyrighted material without explicit permission. This is not a new battle. OpenAI previously attempted to have talent agencies proactively opt out, a strategy that clashed with established copyright law and was quickly reversed.

The case goes beyond celebrities: last week, deepfakes of Martin Luther King Jr. flooded the platform, including racist and exploitative content. OpenAI paused video generation featuring his likeness after his daughter, Bernice A. King, publicly pleaded for the abuse to stop.

“Public figures and their families should ultimately have control over how their likeness is used,” OpenAI stated, signaling a shift toward respecting individual agency.

The Legal Landscape and OpenAI’s Response

The situation underscores the broader legal gray area surrounding AI-generated content. Media publisher Ziff Davis is currently suing OpenAI for copyright infringement, demonstrating that this battle extends beyond celebrity rights to encompass media organizations as well.

While OpenAI's current guardrails remain imperfect – the platform still sometimes generates unauthorized likenesses – the company is now actively engaging with rights holders and talent agencies to mitigate the dangers of IP misappropriation.

The incident with Sora is a clear signal that AI developers must balance innovation with ethical and legal responsibility. The era of unchecked deepfake generation is coming to an end as stakeholders demand greater control over their digital identities.