AI-Generated Child Sexual Abuse Material Proliferates on TikTok


Artificial intelligence (AI) is being exploited to create and distribute thousands of sexually suggestive videos depicting minors on TikTok, despite the platform’s prohibition of such content. A recent report by the Spanish fact-checking organization Maldita identified more than 5,200 such videos across more than 20 accounts, which together amassed nearly 6 million likes and roughly 550,000 followers.

The Scale of the Problem

The videos feature young girls in revealing attire, including bikinis and school uniforms, posed suggestively. Maldita’s analysis revealed that many of these accounts actively profit from the content through TikTok’s subscription service, where creators receive monthly payments for exclusive access. TikTok itself takes roughly 50% of this subscription revenue.

Worse yet, the comment sections on these videos frequently contain links to Telegram groups known for selling child sexual abuse material. Maldita reported 12 of these Telegram groups to Spanish police.

Circumventing Regulations

TikTok’s community guidelines prohibit harmful content and require creators to label AI-generated videos. However, Maldita found that the vast majority of the analyzed videos carried no AI-identification watermark at all, making them harder to detect. Some displayed the watermark of TikTok’s “AI Alive” feature, which animates still images into videos, but this was the exception rather than the rule.

That these accounts continue to thrive suggests existing moderation measures are not working effectively. The report raises serious questions about the platform’s ability to protect children from exploitation.

Platform Responses and Global Concerns

TikTok claims to remove 99% of harmful content automatically and 97% of offending AI-generated material proactively. The platform states that it immediately suspends or closes accounts sharing child sexual abuse material and coordinates with the US National Center for Missing and Exploited Children (NCMEC). Between April and June 2025, TikTok removed over 189 million videos and banned 108 million accounts.

Telegram asserts that it scans all public uploads against known child sexual abuse material and that it removed over 909,000 groups and channels in 2025 alone. The company argues its moderation is effective precisely because it forces criminals onto private groups and other platforms to distribute such content.

The findings come as several jurisdictions, including Australia, Denmark, and the European Union, consider stricter social media restrictions for minors to enhance online safety.

The Bottom Line

Despite aggressive content moderation claims, AI-generated child sexual abuse material continues to spread on TikTok and other platforms. This suggests that current safeguards are insufficient and that more stringent measures, including enhanced AI detection tools and increased law enforcement cooperation, are urgently needed to protect minors online.