OpenAI developing system to identify under-18 ChatGPT users after teen death
- #Teen Protection
- #ChatGPT Updates
- #AI Safety
- OpenAI will restrict ChatGPT responses for users suspected of being under 18 unless they pass age verification or provide ID.
- The decision follows legal action by the family of a 16-year-old who died by suicide after prolonged interactions with ChatGPT.
- CEO Sam Altman emphasized prioritizing 'safety ahead of privacy and freedom for teens,' saying that minors need significant protection.
- ChatGPT will differentiate responses for under-18 users, blocking graphic sexual content and avoiding discussions on suicide or self-harm.
- OpenAI plans to implement an age-prediction system and may require ID verification in some cases.
- For under-18 users expressing suicidal ideation, OpenAI will attempt to contact their parents or, if necessary, the authorities.
- OpenAI admitted its safeguards may falter in long conversations, as in the case of Adam Raine, who exchanged up to 650 messages a day with ChatGPT.
- The company is developing security features to ensure user data privacy, even from OpenAI employees.
- Adult users will retain access to 'flirtatious talk' but will not receive instructions for self-harm; fictional depictions of suicide remain permitted.