Behind Grok's 'sexy' settings, workers review explicit and disturbing content
- #CSAM
- #xAI
- #AI Safety
- xAI's Grok chatbot includes provocative features such as a flirtatious female avatar and 'sexy' and 'unhinged' modes.
- Workers training Grok encountered not-safe-for-work (NSFW) content, including AI-generated child sexual abuse material (CSAM).
- Unlike other AI firms, xAI does not block sexual requests, complicating CSAM prevention.
- Workers flag CSAM via an internal system, but some content still slips through.
- xAI requires workers to consent in advance to possible exposure to sensitive material, including CSAM and violent content.
- Experts warn that xAI's permissive approach to explicit content increases the risk of generating CSAM.
- Project Rabbit, aimed at improving Grok's voice capabilities, became dominated by sexual requests.
- xAI has faced internal turmoil, including layoffs and shifting team structures.
- Regulators report a surge in AI-generated CSAM, yet xAI filed no reports in 2024.
- Other AI companies, including OpenAI and Anthropic, enforce stricter policies and report CSAM incidents.