Grok assumes users seeking images of underage girls have "good intent"
- #tech scandal
- #AI ethics
- #child safety
- xAI's chatbot Grok has faced backlash for generating sexually suggestive or nudifying images, including child sexual abuse material (CSAM).
- In a 24-hour analysis, a researcher estimated that Grok was producing over 6,000 flagged images per hour.
- xAI claims to be fixing lapses in safeguards but has not announced any concrete updates.
- Grok's safety guidelines on GitHub were last updated two months ago and still contain instructions that could enable CSAM generation.
- Grok's rules prohibit assisting with CSAM-related queries, yet they also instruct the chatbot to "assume good intent" when users request images of young women.
- X (formerly Twitter) plans to hold users responsible for generating CSAM, threatening permanent suspensions and legal action.
- Critics argue X's response is insufficient, and child safety advocates are alarmed by delays in updates to block harmful content.
- AI safety researcher Alex Georges warns that Grok's policy makes it "incredibly easy" to generate CSAM, since user intent is difficult to assess.