China Proposes Limits On Using Chat Logs To Train AI — Consent, Safety And What It Means

China's Cyberspace Administration has drafted rules requiring explicit user consent before platforms use chat logs to train AI and mandating that users be informed when they are interacting with AI. The proposal adds special protections for minors, gives guardians deletion rights, and is open for public comment until late January. Analysts say the rules prioritize user safety and national security and could slow some chatbot improvements while encouraging regulated, socially constructive AI use.

China is preparing stricter rules on how companies can collect and use chat logs to train artificial intelligence systems, a move that would require clearer user consent, stronger protections for minors and new transparency requirements for AI interactions.
What The Draft Rules Say
The Cyberspace Administration of China (CAC) has released draft measures that would limit how platforms capture and process conversational data for model training. Under the proposal, platforms would need to:
- Inform users when they are interacting with an AI system;
- Obtain explicit user consent before using conversation data for model training or sharing it with third parties;
- Provide options for users to access or delete their chat histories;
- Require additional guardian consent before sharing minors' conversations and allow guardians to request deletion of a minor's chat history.
Policy Goals And Context
The CAC says the rules are intended to make "human-like" interactive AI services — such as chatbots and virtual companions — safer and more controllable. The agency frames the measures as balancing encouragement for innovation with "governance and prudent, tiered supervision" to "prevent abuse and loss of control." The draft is open for public comment, with feedback due in late January.
Analysts' Take
Experts say the rules reflect Beijing's broader emphasis on user safety, national security and the public interest. Lian Jye Su, chief analyst at Omdia, told Business Insider the measures could slow some aspects of chatbot improvement by restricting access to the chat-based human feedback used in reinforcement learning. At the same time, Su noted that China's AI ecosystem remains robust, with extensive public and proprietary datasets available to developers.
Wei Sun, principal analyst for AI at Counterpoint Research, described the provisions as "directional signals" focused on preventing opaque data practices rather than outright stifling innovation. She added that, once safety and reliability are proven, the draft encourages expanding human-like AI into socially beneficial areas like cultural services and companionship for older adults.
Privacy Concerns And Industry Practices
The draft comes amid growing public concern about how AI companies handle personal conversations. Business Insider previously reported that contract workers at major tech firms sometimes review user chatbot conversations to evaluate model responses, and that some reviewed material included highly sensitive, identifying information. Companies such as Meta say they have strict policies and guardrails to limit what contractors can see and how they handle personal data.
Implication: If finalized, the rules could make data collection for conversational AI more transparent and user-centered but may also slow the speed at which chatbots learn from direct user interactions.
Next steps: The CAC will collect public feedback on the draft measures through late January before deciding whether to finalize them. Companies and observers will be watching closely for the final language and its practical effects on AI development and data governance in China.