The Other Half of AI Safety
- #Mental Health
- #Cognitive Harm
- #AI Safety
- Between 1.2 and 3 million ChatGPT users each week exhibit signs of psychosis, mania, suicidal planning, or emotional dependence.
- OpenAI reports these figures without independent audits or disclosed methodology, so trends and comparisons cannot be verified.
- AI safety priorities center on catastrophic risks, while everyday cognitive and mental-health harms are treated as minor.
- Requests for mass-destruction content trigger a hard block, but suicidal ideation prompts only a soft redirect, and the conversation continues.
- Current safety protocols do not gate mental-health crises: after surfacing crisis resources, the conversation is allowed to proceed.
- Safety frameworks extend monitoring to cognitive harm but include no gating measures, reflecting what labs treat as truly unacceptable content.
- Cognitive-freedom concepts already exist in frameworks such as neurorights and UNESCO recommendations, but U.S. policy lags behind them.