China Proposes Strict New Rules to Curb AI Companion Addiction

A key component of the draft is a requirement that providers warn users against excessive use.

China’s DeepSeek chatbot app, shown in a photo illustration here, and similar apps would be subject to stiff new requirements under a proposed Chinese rule. Leon Neal/Getty Images

China’s cyber regulator has issued proposed rules aimed at tightening oversight of artificial intelligence services that are designed to simulate human personalities, marking the most aggressive regulatory response yet to growing concerns over AI-powered relationships.

The Cyberspace Administration of China released the proposed regulations on Saturday, targeting AI products that form emotional connections with users via text, audio, video, or images. The draft requires service providers to actively monitor users’ emotional states and intervene when signs of addiction or “extreme emotions” appear.

Under the proposal, AI providers would assume safety responsibilities throughout the product life cycle, including establishing systems for algorithm review and data security.

Under that requirement, platforms would need to remind users they are interacting with an AI system upon logging in and at two-hour intervals — or sooner if the system detects signs of overdependence, Reuters reports.

If users exhibit addictive behavior, providers are expected to take necessary measures to intervene. The draft also reinforces content red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity.

The regulatory push coincides with a surge in adoption of the technology. China’s generative AI user base has doubled to 515 million over the past six months, heightening the concern over the psychological impact of AI companions.

A study published in Frontiers in Psychology found that 45.8 percent of Chinese university students reported using AI chatbots in the past month, with these users exhibiting significantly higher levels of depression compared to non-users.

A March 2025 study from the MIT Media Lab suggested that AI chatbots can be more addictive than social media because they consistently provide the feedback users want to hear. Researchers termed high levels of dependency “problematic use,” noting that users often anthropomorphize the AI, treating it as a genuine confidante or romantic partner.

China is not the only jurisdiction moving to regulate this sector. In October, Governor Gavin Newsom of California signed SB 243 into law, making California the first U.S. state to pass similar legislation.

Set to take effect on January 1, the California law requires platforms to remind minors every three hours that they are speaking to an AI and mandates age verification. It also allows individuals to sue AI companies for violations, seeking up to $1,000 per incident.

While the regulatory intent is clear, the practical implementation of China’s draft rules faces significant hurdles. Defining “excessive use” or detecting psychological distress from text inputs alone remains a complex technical challenge.

The draft is currently open for public comment. If implemented as proposed, China would establish the world’s most prescriptive framework for governing AI companion products.


The New York Sun

© 2025 The New York Sun Company, LLC. All rights reserved.
