When I first read, two weeks ago, about content moderators speaking of the psychological trauma of moderating Big Tech’s content for training models, I waited for the other shoe to drop. Instead, aside from a BBC mention related to Facebook, the whole thing seems to have dropped off the media’s radar.
The images pop up in Mophat Okinyi’s mind when he’s alone, or when he’s about to sleep.
Okinyi, a former content moderator for OpenAI’s ChatGPT in Nairobi, Kenya, is one of four people in that role who have filed a petition to the Kenyan government calling for an investigation into what they describe as exploitative conditions for contractors reviewing the content that powers artificial intelligence programs.
“It has really damaged my mental health,” said Okinyi.
The 27-year-old said he would view up to 700 text passages a day, many depicting graphic sexual violence. He recalls he started avoiding people after having read texts about rapists and found himself projecting paranoid narratives on to people around him. Then last year, his wife told him he was a changed man, and left. She was pregnant at the time. “I lost my family,” he said.
“‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models”, Niamh Rowe, The Guardian, August 2nd, 2023.
I expected more on this because it’s… well, it’s terrible to consider, especially for between $1.46 and $3.74 an hour through Sama. Sama is a data annotation services company headquartered in California that employs content moderators around the world. As its homepage says, “25% of Fortune 50 companies trust Sama to help them deliver industry-leading ML models”.
Thus, this should be a bigger story, I think, but since it’s happening outside of the United States and Europe, it probably doesn’t score big with the larger media houses. The BBC differs a little in that regard.
A firm which was contracted to moderate Facebook posts in East Africa has said with hindsight it should not have taken on the job.
Former Kenya-based employees of Sama – an outsourcing company – have said they were traumatised by exposure to graphic posts.
Some are now taking legal cases against the firm through the Kenyan courts.
Chief executive Wendy Gonzalez said Sama would no longer take work involving moderating harmful content.
“Firm regrets taking Facebook moderation work”, Chris Vallance, BBC News, August 15th, 2023.
The CEO of Sama says the company won’t be taking further work related to harmful content. The question then becomes what counts as harmful content, so there’s no doubt in my mind that Sama is in a difficult position itself. She points out that Sama has ‘lifted 65,000 people out of poverty’.
Of course, global poverty is decreasing while economic disparity is increasing – something that keeps being forgotten, and which says much about how the measurement of global poverty stands still while the rest of the world moves on.
The BBC article also touches on the OpenAI issue covered in The Guardian article above.
We have global poverty, economic disparity, big tech and the dirty underbelly of AI training models and social media moderation…
This is something we should all be following up on, I think. It seems like ‘lifting people out of global poverty’ is big business, in its own way, too, and that is just a little bit disturbing.