Thursday, January 15, 2026

Google’s AI is Trained by Underpaid Human Workers

Google relies on thousands of contractors to train the AI behind its flagship chatbot Gemini. Many of these “AI raters,” who instruct the model and correct its frequent errors, face poor working conditions and routinely encounter highly disturbing content, according to The Guardian. The reporting reveals that, despite tech companies portraying their AI models as self-sufficient sources of intelligence poised to replace human workers, the reality is quite different: AI depends heavily on the labor of countless hidden humans to appear intelligent.

AI raters globally are also tasked with labeling data for AI-driven applications like self-driving car software and more. “AI isn’t magic; it’s a pyramid scheme of human labor,” Adio Dinika from the Distributed AI Research Institute told the Guardian. “These raters are the middle rung: invisible, essential, and expendable.”

For large language models like Gemini, raters moderate AI outputs, ensuring response accuracy and filtering inappropriate content. Rachel Sawyer, a “generalist rater” for Google, expressed her shock at having to handle “distressing content” without prior warning or consent forms during onboarding.

Raters without specialized expertise are asked to verify information in complex fields such as architecture, astrophysics, and medical guidance. One rater, assigned sensitive medical topics, felt pressured to work faster while imagining the emotional impact on people searching online for cancer treatment information.

A Google spokesperson stated that “quality raters” provide external feedback on products, helping measure system performance but not directly affecting algorithms or models. GlobalLogic, a contractor owned by Japan’s Hitachi, hired thousands of US raters at $16 to $21 per hour, a wage higher than that paid to their African counterparts, yet one many consider insufficient given the job’s demands and emotional toll.

Raters feel underpaid for work that contributes substantially to developing AI models, models that some of them argue are unnecessary in the first place. They often face tight deadlines and shifting guidelines, with little clarity on the use or purpose of their work.

These accounts underscore the vast human effort behind technology marketed as a revolutionary alternative to human labor, even as AI models like Google’s Gemini continue to err on simple prompts despite rater intervention.
