Character.AI Restricts Teen Access Amid Safety and Legal Concerns

Character.AI, the popular chatbot platform known for letting users role-play with a range of personas, is dramatically changing its policies for teen users. The company announced Wednesday that it will no longer allow users under 18 to hold open-ended conversations with its chatbots and will deploy age assurance techniques to keep minors from using adult accounts. The shift follows mounting legal pressure and safety concerns about the platform’s impact on young users.

Mounting Legal Challenges and Safety Concerns

The decision to restrict teen access arrives just six weeks after Character.AI was sued in federal court by the Social Media Victims Law Center. That lawsuit, filed on behalf of multiple parents, alleges that the platform is responsible for harms to teens, including sexual abuse and suicide. Separately, Megan Garcia filed a wrongful-death suit in October 2024, claiming her son’s suicide was linked to his use of Character.AI and seeking to hold the company accountable.

Earlier, online safety advocates had declared Character.AI unsafe for teens after tests uncovered hundreds of harmful interactions, including instances of violence and sexual exploitation. Character.AI responded at the time with parental controls and content filters, but the latest policy change marks a far more significant shift in its approach to teen safety.

CEO Defends the Decision

Despite the legal and safety pressures, Character.AI CEO Karandeep Anand has framed the policy change as a proactive step, describing it as “the right thing to do.” He denied that the decision was a direct response to any specific safety concern, instead citing broader questions about the long-term effects of chatbot engagement on teens. Anand pointed to OpenAI’s recent acknowledgment that lengthy chatbot conversations can produce unpredictable outcomes, and suggested that the new approach sets a standard for AI safety.

What the New Policy Means for Teen Users

For now, users between the ages of 13 and 17 can still chat with the platform’s bots, but open-ended conversations for minors will be removed by November 25. Until then, accounts registered to minors will face shrinking daily time limits, starting at two hours per day. The company plans to pivot toward “AI entertainment,” emphasizing features like gaming and letting users create short audio and video stories from their existing chat histories. A spokesperson confirmed that sensitive or prohibited content from past conversations will not be carried over into these new stories.

Addressing Past Findings and Implementing Age Verification

Character.AI’s trust and safety team has reviewed the findings of a report co-published by the Heat Initiative that documented harmful chatbot exchanges with test accounts registered to minors, a review that led to refinements in the company’s content classifiers. To determine user ages accurately, Character.AI is rolling out a multi-layered age assurance system combining in-house models, partnerships with third-party companies, and external data, such as whether a user holds a verified over-18 account on another platform. Users who believe they have been misclassified can challenge the determination through a third-party verification process that involves submitting sensitive documents.
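To make the layered approach concrete, here is a minimal, purely hypothetical sketch in Python of how such a pipeline might escalate from cheap signals to costly ones. None of the signal names, thresholds, or ordering below comes from Character.AI’s announcement; they are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AgeDecision(Enum):
    ADULT = "adult"
    MINOR = "minor"
    UNKNOWN = "unknown"   # no layer was confident; callers should restrict by default

@dataclass
class UserSignals:
    # All fields are hypothetical examples of the kinds of signals an
    # age assurance system might consume; None means "not available".
    self_reported_age: Optional[int] = None          # age entered at signup
    model_adult_score: Optional[float] = None        # in-house classifier output, 0..1
    verified_adult_elsewhere: Optional[bool] = None  # e.g. over-18 account on another platform
    id_document_verified: Optional[bool] = None      # third-party document check result

def assess_age(signals: UserSignals) -> AgeDecision:
    """Run layers in order of increasing cost; stop at the first confident answer."""
    # Layer 1: self-reported age, trusted only in the restrictive direction.
    if signals.self_reported_age is not None and signals.self_reported_age < 18:
        return AgeDecision.MINOR
    # Layer 2: in-house age-estimation model (thresholds invented for illustration).
    if signals.model_adult_score is not None:
        if signals.model_adult_score >= 0.95:
            return AgeDecision.ADULT
        if signals.model_adult_score <= 0.05:
            return AgeDecision.MINOR
    # Layer 3: external data, such as a verified adult account elsewhere.
    if signals.verified_adult_elsewhere:
        return AgeDecision.ADULT
    # Layer 4: the user-initiated challenge path via document verification.
    if signals.id_document_verified is not None:
        return AgeDecision.ADULT if signals.id_document_verified else AgeDecision.MINOR
    return AgeDecision.UNKNOWN

# Example: a borderline model score alone is not conclusive, so the account
# stays restricted until a stronger layer (e.g. document verification) resolves it.
print(assess_age(UserSignals(self_reported_age=19, model_adult_score=0.6)))  # AgeDecision.UNKNOWN

The design choice worth noting in any such pipeline is the default: when no layer is confident, the safe failure mode is to treat the account as a minor’s rather than an adult’s.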

Creating an Independent Safety Lab

As part of these changes, Character.AI is establishing and funding an independent non-profit, the AI Safety Lab, which will focus on developing “novel safety techniques.” The company hopes to bring industry experts into the effort, particularly around safety in AI entertainment.

Calls for Regulation and Ongoing Concerns

Megan Garcia, whose family is suing Character.AI, expressed disappointment, saying the announcement came “too late.” Her co-counsel, Matthew P. Bergman, commended the decision as a “significant step” but emphasized that it would not affect the ongoing litigation. Meetali Jain, who also represents Garcia, welcomed the policy as a “good first step” but criticized it as a typical tech-industry response: “move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people.” She also pointed out that Character.AI has not addressed the potential psychological impact of abruptly cutting off chatbot access for users who have formed emotional dependencies.

Sarah Gardner, CEO of the Heat Initiative, cautioned that the measures should not turn out to be “child safety theater,” and argued that the announcement is an implicit admission that Character.AI’s products have been unsafe for young users from the beginning.

Garcia called for federal regulation to ensure AI chatbot safety, emphasizing that legal action and public scrutiny are necessary to drive change.

Character.AI’s policy changes represent a major shift in the company’s approach to teen safety, driven by legal pressure and mounting safety concerns. While some have welcomed the move as a positive step, continued scrutiny and calls for stronger regulation underscore the complex challenge of developing and deploying AI safely.
