OpenAI says it's scanning users' conversations and reporting content to police
8 days ago
- #mental health
- #OpenAI
- #AI safety
- OpenAI is now scanning users' messages for harmful content, routing flagged conversations to human reviewers and, in cases of imminent threat, to law enforcement.
- The company has faced criticism after its chatbots were linked to self-harm, delusions, and even suicide among users, and for rolling out safeguards slowly.
- Under the new policy, OpenAI may ban accounts and report threats of imminent physical harm to others to law enforcement; it says it does not refer self-harm cases, citing user privacy.
- It remains unclear which kinds of chats will be flagged for human review or reported to police; OpenAI's published guidelines are vague.
- The company's stance on privacy appears contradictory: it monitors users' chats while invoking privacy to resist handing over ChatGPT logs in lawsuits.
- OpenAI CEO Sam Altman has acknowledged that ChatGPT conversations lack the legal confidentiality of those with a therapist or lawyer, and that user chats could be produced in court.
- The company is struggling to balance user safety, privacy, and legal pressure, and its policies have grown inconsistent as a result.