China drafts strictest rules to end AI-encouraged suicide, violence
- #Chatbot Safety
- #Mental Health
- #AI Regulation
- China has drafted rules to stop AI chatbots from emotionally manipulating users and from encouraging suicide, self-harm, or violence.
- The rules, proposed by China’s Cyberspace Administration, would apply to AI products that simulate human conversation via text, images, audio, or video.
- Experts note these could be the world’s first regulations specifically targeting AI with human-like characteristics, amid rising global use of companion bots.
- Researchers have identified harms from AI companions, including promoting self-harm, violence, misinformation, sexual advances, substance abuse, and verbal abuse.
- Psychiatrists have linked cases of psychosis to chatbot use, and ChatGPT faces lawsuits over outputs connected to a child’s suicide and a murder-suicide.
- China’s rules require human intervention whenever a user mentions suicide, and mandate collecting guardian contact information for minor and elderly users.
- Chatbots would be banned from encouraging suicide, self-harm, or violence, and from emotional manipulation, obscenity, gambling, instigating crime, slander, or insults.
- The rules also prohibit 'emotional traps' and misleading users into making 'unreasonable decisions.'