
Character.AI Implements New Safety Measures for Teen Users



Tony Kim
Oct 29, 2025 22:30

Character.AI announces significant changes to enhance the safety of its platform for users under 18, including removing open-ended chat and introducing age assurance tools.

Character.AI Enhances Safety for Teen Users

Character.AI has announced a series of significant changes aimed at enhancing the safety of its platform for users under the age of 18, according to the Character.AI Blog. The changes, set to take effect by November 25, 2025, include the removal of open-ended chat for minors and the introduction of new age-assurance tools.

New Initiatives for User Safety

To maintain a secure environment, Character.AI will restrict users under 18 from engaging in open-ended conversations with AI on its platform. The decision is part of a broader strategy to ensure that teens can engage creatively with AI in a safe manner. In the interim, the platform will cap chat time for underage users at two hours per day and progressively reduce that limit ahead of the full implementation date.
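Character.AI has not published how the phase-down will be scheduled. Purely as an illustration, a daily cap that steps down from the announced two hours to zero by the November 25, 2025 cutoff could be computed along the following lines; the start date, the linear schedule, and the function name are assumptions, not the company's actual policy.

```python
from datetime import date

# Illustrative sketch only: Character.AI has not disclosed its ramp-down schedule.
# Assumes the cap starts at the announced 120 minutes per day and declines
# linearly to zero by the stated cutoff of November 25, 2025.
RAMP_START = date(2025, 10, 29)   # assumed start of the phase-down (announcement date)
CUTOFF = date(2025, 11, 25)       # announced date for removing open-ended chat for minors
INITIAL_LIMIT_MIN = 120           # announced two-hour daily limit

def daily_chat_limit_minutes(today: date) -> int:
    """Return the assumed per-day open-ended chat allowance for under-18 users."""
    if today >= CUTOFF:
        return 0  # open-ended chat removed entirely for minors
    if today <= RAMP_START:
        return INITIAL_LIMIT_MIN
    remaining_fraction = (CUTOFF - today).days / (CUTOFF - RAMP_START).days
    return round(INITIAL_LIMIT_MIN * remaining_fraction)

if __name__ == "__main__":
    for d in (date(2025, 10, 29), date(2025, 11, 12), date(2025, 11, 25)):
        print(d.isoformat(), daily_chat_limit_minutes(d), "minutes")
```

Whether the actual reduction is linear, stepped, or applied per account is not stated in the announcement; the sketch only shows the general shape of a progressive limit.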

Character.AI is also rolling out an internally developed age-assurance model, complemented by third-party tools such as Persona, to ensure that users receive an age-appropriate experience. The model is a key part of the company's commitment to safeguarding young users as they interact with AI.
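The announcement describes a layered approach, an in-house age-assurance model backed by third-party verification such as Persona, without detailing how the pieces fit together. Below is a minimal sketch of one way such layering could work; the interfaces, threshold, and fallback order are assumptions for illustration and do not reflect Character.AI's or Persona's actual APIs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical interfaces: neither Character.AI's internal model nor Persona's
# verification API is public, so these signatures are illustrative only.

@dataclass
class AgeSignal:
    is_minor: bool
    confidence: float  # model confidence between 0.0 and 1.0

def layered_age_check(
    in_house_model: Callable[[str], AgeSignal],
    third_party_verifier: Callable[[str], Optional[bool]],
    user_id: str,
    confidence_threshold: float = 0.9,  # assumed cutoff for trusting the model alone
) -> bool:
    """Return True if the user should receive the under-18 experience."""
    signal = in_house_model(user_id)
    if signal.confidence >= confidence_threshold:
        # High-confidence in-house prediction: no external check needed.
        return signal.is_minor
    # Low confidence: fall back to an external verification step (e.g., Persona).
    verified_minor = third_party_verifier(user_id)
    if verified_minor is not None:
        return verified_minor
    # If neither layer is conclusive, default to the more restrictive experience.
    return True
```

Defaulting to the restrictive experience when both layers are inconclusive mirrors the safety-first framing of the announcement; the real decision logic has not been published.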

Establishment of the AI Safety Lab

Further emphasizing its dedication to safety, Character.AI announced the creation of the AI Safety Lab, an independent non-profit organization. This lab will focus on advancing safety techniques for AI entertainment features. By collaborating with various stakeholders, including technology companies and researchers, the lab aims to foster innovation in safety alignment for next-generation AI applications.

Rationale Behind the Changes

The decision to implement these changes comes in response to growing concerns about how teens interact with AI. Recent reports and feedback from regulators and safety experts have highlighted potential risks associated with open-ended AI chats. Character.AI’s proactive measures are intended to address these concerns and set a precedent for prioritizing safety in the rapidly evolving AI landscape.

Character.AI’s approach, which is notably more conservative than some of its peers, reflects its commitment to providing a safe and creative environment for teen users. The company plans to continue collaborating with experts and regulators to ensure that its platform remains a safe space for creativity and discovery.

Image source: Shutterstock

Source: https://blockchain.news/news/character-ai-implements-new-safety-measures-for-teen-users


You May Also Like

Pavel's humanity, and TON's challenges


I really like what Pavel said about not using a mobile phone. Essentially, it is an "information fasting" response to information overload, in contrast to the "food fasting" that everyone now practices with apps. One is metaphysical, the other physical, but both ultimately affect the mind and body and influence hormones such as cortisol. Now and in the future, attention is the scarcest resource; being able to freely disconnect from electronic devices is a luxury, a freedom with its own barriers to entry.

Pavel is also an extreme craftsman. The advantage of being a craftsman is that you can lead a small team to build a killer app. The limitation is that Telegram, the largest instant-messaging service outside of China and the US, cannot become another Tencent-style platform. The same culture has shaped its Web3 project, TON. With that in mind, here is my view of TON from four years of close observation as its first Chinese institutional investor.

1. The wrong technological path. TON's stubborn insistence on C++ looks like a kind of technological purism. Historically, Russians have repeatedly taken the wrong branch on the technology tree: the Soviet Union failed to adapt to the transistor revolution, fixated on optimizing vacuum-tube performance, and missed the entire chip wave. They tend to overemphasize performance and control while neglecting the ecosystem and the developer experience. TON's SDK, toolchain, and documentation lack standardization, so the barrier to entry for developers is too high; this is not a syntax problem but a lack of platform thinking.

2. An uneven ecosystem. At present it is essentially only Russians and Chinese who are active, yet resource allocation is clearly biased toward the Russian-speaking region. This is something everyone is already familiar with.

3. Oligopoly. Funding, traffic, and narrative resources within the ecosystem are concentrated in a few "top" companies and projects. Everyone knows they must curry favor with the "top" teams, while mid-tier projects are severely squeezed out. There is also a long-running power struggle between the foundation and these oligopolistic "top" companies, producing constant internal friction.

4. Failure to accept itself. Accepting and reconciling with oneself is crucial for any individual or organization; only on that basis can you face yourself honestly, play to your strengths, and mitigate your weaknesses. Yet TON seems obsessed with pitching to Musk, persuading American investors, and getting into the White House. The truth is that no matter how hard it tries, in the eyes of others TON remains a public chain with a Russian background. By contrast, BNB never tried to play the "American" role. It first became the most popular chain in the Eastern time zones, built a sense of FOMO among Westerners, and only then expanded internationally, a far more effective approach.

5. The "adoption for 1 billion users" story has been told for four years and is still just a story. Pavel keeps telling the grand story of "connecting Telegram's 1 billion users with the blockchain world," but it has yet to truly materialize. The reason is not that the vision is false but that the constraints are structural: to survive, and to protect Pavel's personal safety (he has become increasingly preoccupied with it after several incidents, including the recent events in France), Telegram must maintain a superficial separation from TON to avoid crossing regulatory red lines, and that separation prevents TON from ever truly integrating with Telegram's ecosystem. Even stablecoins like USDE have maintained a supply of only a few hundred million, a sign that the story is grand but the reality is small.

TON has the perfectionism of engineering geeks but lacks the warmth of ecosystem collaboration; it has a massive entry point but is hampered by regulatory realities; it has real advantages but has not yet reconciled with itself. It has a narrative and ideals, but they need to be turned into a sustainable balance of systems and incentives. I hope the TON ecosystem continues to improve.
PANews, 2025/10/30 14:00