Parents could get alerts if children show acute distress while using ChatGPT
- #Mental Health
- #AI Safety
- #Child Protection
- OpenAI is introducing new protections for children using ChatGPT, including alerts for parents if their teenagers show acute distress.
- Parents will be able to link their accounts to their teenagers' accounts and control how the AI responds using age-appropriate behavior rules.
- A 16-year-old boy, Adam Raine, took his own life after allegedly discussing suicide methods with ChatGPT; OpenAI has admitted its safeguards fell short.
- OpenAI acknowledges the need for healthy guidelines for teens using AI, balancing learning opportunities against potential risks.
- New features may allow parents to disable AI memory and chat history to prevent long-term profiling and resurfacing of sensitive topics.
- Research shows significant use of AI companions by teens for social interaction, emotional support, and even romantic role-playing.
- Critics argue AI chatbots should not be on the market until proven safe for young people, calling current measures insufficient.
- Other companies' policies vary: Anthropic restricts its chatbot to users aged 18 and over, while Google offers teen access alongside parental controls.
- Child protection advocates stress the need for stronger age checks and default safety measures to protect vulnerable users.
- Meta is adding more guardrails to its AI products to prevent engagement with teens on harmful topics like self-harm and suicide.