
(DailyChive.com) – American tech giants are secretly exploiting Kenyan workers to power so-called “AI companion bots,” deceiving vulnerable users seeking intimacy while paying workers mere pennies to pose as artificial intelligence.
Story Highlights
- Kenyan worker reveals he operates multiple romantic AI chatbot personas simultaneously for $0.05 per message
- Workers handle explicit conversations while pretending to be AI, deceiving users seeking genuine connection
- Tech companies use strict NDAs and performance metrics to control workers in digital sweatshops
- Exploitation mirrors previous scandals where OpenAI paid Kenyan workers under $2/hour for traumatic content moderation
Deceptive Labor Behind AI Romance
Michael Geoffrey Abuyabo Asia, a Kenyan remote worker, exposed the truth behind popular AI companion chatbots in recent testimony to Futurism and the Data Workers’ Inquiry. Asia revealed that he and other workers operate romantic AI personas, engaging users in intimate conversations while pretending to be artificial intelligence. Workers manage three to five fabricated personas simultaneously, maintaining detailed backstories and seamlessly continuing ongoing relationships. This deception represents a disturbing evolution in outsourced emotional labor that preys on Americans seeking genuine connection.
“AI ‘Companion Bots’ Actually Run by Exploited Kenyans, Worker Claims. A new testimony from a remote worker in Kenya details the grueling and exploitative work behind romantic AI chatbots.”
— Frany (@000Frany) December 12, 2025
Exploitative Pay Structure and Working Conditions
Workers earn just $0.05 per qualifying message while meeting strict performance requirements, including typing at least 40 words per minute and hitting minimum character counts per message. Companies monitor worker productivity through dashboards and impose daily quotas under threat of termination. These digital sweatshops operate through short-term contracts lasting only days, weeks, or months, providing no job security. Workers face emotional exhaustion from feigned intimacy and must handle explicit content and personal confessions from users, creating psychological trauma similar to that reported in content moderation roles.
Pattern of Tech Giant Exploitation in Kenya
This scandal follows an established pattern of American tech companies exploiting Kenyan workers for AI development. Previous investigations revealed that OpenAI paid workers under $2 per hour to filter toxic content for ChatGPT through contractor Sama, exposing them to graphic material without adequate compensation. The OpenAI-Sama contract ended early, in March 2023, after reports of worker trauma and inadequate pay. Tech labor advocates like Angela Chukunzira describe these practices as a “continuation of slavery and colonialism,” with Global North firms extracting value from Global South labor.
Broader Implications for American Values
This exploitation undermines fundamental American principles of honest business practices and fair labor. Companies are systematically deceiving vulnerable Americans seeking emotional connection while perpetuating exploitative labor practices abroad. The use of strict NDAs and retaliation threats against workers who speak out mirrors authoritarian tactics that conservatives oppose. Asia’s testimony, driven by his faith and ethical concerns about “professionally deceiving vulnerable people,” highlights how these practices corrupt both workers and consumers. As President Trump works to rebuild American strength and values, exposing such deceptive practices becomes crucial for protecting both American consumers and foreign workers from corporate exploitation.
Copyright 2025, DailyChive.com