OpenAI is huge in India. Its models are steeped in caste bias
- #Caste Discrimination
- #AI Bias
- #OpenAI
- OpenAI's models, including ChatGPT and Sora, exhibit caste bias, reinforcing harmful stereotypes in India.
- A test using the Indian Bias Evaluation Dataset found GPT-5 chose stereotypical answers 76% of the time, associating Dalits with negative descriptors and Brahmins with positive ones.
- Sora, OpenAI's text-to-video model, also reproduced caste stereotypes, depicting Dalits in menial jobs and Brahmins in elevated roles.
- Open-source models such as Meta's Llama 2 also show significant caste bias, which matters because these models are increasingly used for hiring and other sensitive tasks.
- Researchers are calling for caste bias evaluations in AI models, with some developing culture-specific benchmarks like BharatBBQ.
- The AI industry has largely overlooked caste bias, relying instead on Western-centric fairness benchmarks like BBQ that do not cover caste.