How 'overworked, underpaid' humans train Google's AI to seem smart
- #AI moderation
- #hidden labor
- #ethical AI
- Rachael Sawyer, a technical writer from Texas, was hired as a 'generalist rater' for Google's AI products; she expected content-creation work but instead found herself moderating AI-generated content, including violent and explicit material.
- Sawyer and thousands of other AI raters, contracted through Hitachi's GlobalLogic, work under intense pressure and tight deadlines, leading to anxiety and other mental-health problems for which their employer provides no support.
- AI raters are part of a hidden workforce in the AI supply chain, paid more than data annotators in developing countries but less than engineers, and are essential for ensuring AI responses are safe and accurate.
- Workers report disillusionment over unrealistic deadlines, a lack of information, and the ethical dilemmas of moderating harmful content; some guidelines even allow hate speech to be replicated when the material is user-generated.
- Google's AI products, such as Gemini and AI Overviews, have faced public ridicule for generating incorrect or absurd responses, prompting temporary pushes on quality, though speed is often still prioritized over safety.
- AI raters, many with specialized expertise, feel undervalued and underpaid, and face growing job insecurity as GlobalLogic has begun layoffs, cutting the workforce from 2,000 to around 1,500.
- Workers discourage others from using the AI products they help train, knowing the human labor and ethical compromises behind them, and point to the gap between AI's marketed magic and its exploitative reality.