The human cost of AI training

There’s strange comfort in imagining that artificial intelligence (AI) learns all by itself. We like to think of it as something detached, a machine that just happens to know things. But the truth is far more human.
A recent CBS 60 Minutes segment featured young Kenyans, most of them college graduates, working for just $2 an hour. Their job was to train AI systems.
For eight hours a day, they stared at screens. Some drew boxes around microwaves and faces. Others circled medical abnormalities. The most disturbing jobs involved labeling violent content: child abuse, animal cruelty, pornography, even footage of suicide.
They did this to help AI recognize what is safe and what is not. Our machines seem polite and helpful because someone else went through hell to teach them.
In my previous article, I focused on the transformer architecture, the T in GPT. This time, I’m writing about the P, as in pretrained. In that earlier piece, I described a silent librarian inside the system, assembling meaning from word vectors, coordinates that clump together through a self-attention mechanism regardless of their order or position. These labeled images and words are what the librarian memorized during pretraining. The system feels intelligent because someone fed it with care, frame by frame.
Back to the CBS segment: Some of the workers are now suing. Others walked away with psychiatric evaluations confirming what they already knew: they were not the same anymore. One man said he could no longer enjoy intimacy with his wife. Another said it was now easier to cry than to speak.
And I had to ask myself: if I had not seen it, would I have cared?
We celebrate AI for its convenience. It writes our emails, organizes our notes, and even helps with homework. But behind each reply is a chain of labor. Invisible. Outsourced. Underpaid.
What happened in Kenya isn’t rare. Tech companies often avoid direct responsibility by using third-party firms. These middlemen keep wages low and shield their client brands from blame. Documents showed OpenAI paid the outsourcing firm $12.50 per hour per worker, yet the Kenyan workers received only $2 an hour.
What disturbed me most was my own reaction. My first thought was utilitarian. If this helps more people than it harms, isn’t that a good trade? But would I feel the same if it were my sister labeling gore to pay the bills? Or my friend watching suicide footage so I could use a free chatbot?
The answer is not simple. Kenya needs jobs. Its government promotes the country as the Silicon Savannah, a hub for tech outsourcing. Refusing contracts might mean fewer opportunities, even if those jobs come at a steep psychological cost.
In my previous piece, I described the model as a tireless librarian. But now I see that she stands on the shoulders of exploited humans. Labelers feed her the data. They sort the content. They carry the emotional weight. Although the librarian never sleeps, the ones who trained her are exhausted, sometimes broken.
The problem runs deeper than money. Our systems are built to hide the cost. Executives do not see the harm, software engineers do not write the contracts, and users do not know what happens behind the scenes.
So we all pretend it is progress. But maybe ignorance is not bliss. Maybe it is complicity.
I am not calling for a boycott. I still use AI. It helps me work. But we cannot claim to be ethical if we benefit from suffering that we choose not to see.
Maybe the point is not to cancel or condemn. Maybe it is just to see more clearly.
I admit, it feels strange knowing all this and still using the tools. There’s a contradiction I cannot ignore.
Perhaps this discomfort is exactly what we need.
There is something to be said for the weight of knowing. When we pretend the harm does not exist, we can use these tools thoughtlessly. But when we acknowledge the hands that trained our machines, we become more intentional. We pause before asking AI to do work we could do ourselves. We question whether this task truly needs automation, or whether we are simply avoiding effort.
The inconvenience of caring is not a bug. It is a feature. It is our human nature. Regrettably, it is also our nature to forget as time passes. But if we are going to keep using, building, and improving these tools, then at the very least, let us not forget those bearing the weight underneath. We may not know their names, but we can remember their sacrifice.
—————
Leo Ernesto Thomas G. Romero is a CPA-lawyer for a publicly listed company, focusing on corporate and tax matters. He is pursuing a Master’s in Fintech under the MMU-AIM Pioneer Cohort of 2025, and was recently named a finalist for Fintech Lawyer of the Year at the 2025 ALB SE Asia Law Awards.