It’s in Chapter 2 that Tom Sawyer gets punished and has to whitewash a fence for his Aunt Polly, and when mocked about his punishment by another boy, he claims whitewashing the fence is fun. It’s so fun, in fact, that the other kid gives Tom an apple (the initial offer was the apple core, I believe), and so Tom pulls this con on other kids, collecting their treasures while they paint the fence. He gets ‘rich’ and has fun at their expense while they do his penance.
That’s what’s happening with social media like Facebook, LinkedIn, Twitter, etc.
Videos, text – everything being generated on these social networks – is being used to train generative AI that you can use for free, at least for now, while others pay and subscribe to get the better-trained versions.
It’s a pretty good con – a classic one, in fact – that I suppose people just didn’t read about.
Some people will complain when the AIs start taking over whitewashing the fences, or start whitewashing their children.
Meanwhile, these same companies are selling metaphorical paint and brushes.
This started off as a baseline post regarding generative artificial intelligence and its aspects, and it grew fairly long because even as I was writing it, new information was coming out. It’s my intention to do a ’roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate them, since the larger implications are hidden in these details, and I will try to stay on top of it as things progress.
It’s long enough that it could have been several posts, but I wanted it all together at least once.
No AI was used in the writing, though some images have been generated by AI.
The two versions of artificial intelligence on the table right now – the marketed and the reality – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.
The marketing aspect is a constant distraction, feeding us what helps with stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not being addressed in a cohesive way.
To simplify things, this post breaks it down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.
The Input.
There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.
The Training Data.
The focus so far has been on what goes into their training data, and that has been an issue – spawning lawsuits and, less obviously, eroding trust in the companies involved.
…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…
While some of these actions are only questionably legal, to some they are unquestionably unethical – thus the revolt mentioned last year against AI companies using content without permission. That revolt is of questionable effect because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.
There’s a need for that audit, if only to allow for trust.
…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.
While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least in a public way, though the United States is pushing for global regulations on artificial intelligence at the UN. The status of that doesn’t seem to have been updated, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).
There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of the learning data models and their sources, as well as the algorithms involved – and, to get a little ahead, of the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it has been happening either.
There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:
“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”
Given how much information these companies have already taken from Web 2.0 forward – through social media websites such as Facebook and Instagram (Meta) and Twitter, and even through search engines and advertising tracking – it’s pretty obvious that all of this would be in the training data as well.
The Algorithms.
The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person can answer some questions, even as a layperson, with less of a carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.
There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.
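For those who haven’t run into it, fuzzy logic replaces hard true/false answers with degrees of truth. Here’s a minimal, self-contained sketch of the idea in Python – purely illustrative, with made-up sets and thresholds, and not a claim about how any vendor’s system actually works:

```python
# A minimal sketch of fuzzy logic: truth comes in degrees between 0 and 1
# rather than a hard yes/no. Purely illustrative; the sets and numbers here
# are invented and reflect no vendor's actual implementation.

def triangular(x, left, peak, right):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_and(a, b):
    """Classic min-based fuzzy conjunction."""
    return min(a, b)

# How "warm" is 27°C? How "humid" is 65%? Both are matters of degree.
warm = triangular(27.0, 18, 28, 38)      # ≈ 0.90
humid = triangular(0.65, 0.4, 0.7, 1.0)  # ≈ 0.83

# Rule: IF warm AND humid THEN uncomfortable (to some degree).
discomfort = fuzzy_and(warm, humid)
print(f"warm={warm:.2f} humid={humid:.2f} discomfort={discomfort:.2f}")
```

The point is that conclusions come in degrees rather than crisp rules, which is part of why such systems are so hard to audit from the outside: there’s no single rule to point at and inspect.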
The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.
The Output.
One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also something that was very prominent for a short period.
As students use generative AI, education itself has been disrupted. This is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford in the educational system, which is something worth exploring.
…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…
…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…
Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of fake news with AI, deepening concerns related to social media and AI, and it should make us ask why billionaires involved in artificial intelligence wield the influence that they do. How much of that is generated? We have an idea how much it is lobbied for.
Meanwhile, Facebook has been spamming users and has been restricting accounts without demonstrating a cause. If there were a video tape in a Blockbuster on this, it would be titled, “Algorithms Gone Wild!”.
Journalism is also impacted by AI, though real journalists tend to be rigorous about their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, it will go out of its way to make sure that it is: it has a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information because, even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence did not participate much, if at all, at a government level.
Meanwhile, AI is also apparently being used as a cover for some outsourcing:
Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…
Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.
And the waters of The Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.
In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.
The full article at the New York Times is worth expending one of your free articles on, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
If you haven’t left Facebook yet, as I have, you’ve probably noticed a lot of AI spam. I did when I was there and blocked a bunch of it (it was hard to keep up with).
Well, it isn’t just you.
“…What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”…”
So not only are the algorithms arbitrarily restricting user accounts, as they did mine, but they’re feeding people spam to an extent that wasn’t just noticeable to one individual.
Meanwhile, Facebook has been buying GPUs to develop ‘next level’ AI, when in fact their algorithms are about as gullible as their GPU purchases are numerous.
The people who follow me on Facebook, as well as within the Reality Fragments Group, know about the troubles I’ve had with account restrictions. I had just gotten off one restriction, for reasons never disclosed to me, when a few days later I got restricted again – and again – with no reason given.
This reminded me of how I had learned not to post about things related to Ukraine, after a Ukrainian tractor pulling the Moskva was flagged as fake news when it was clearly satire. While I do support the Palestinian civilians, I hadn’t really posted anything in support of Palestine (I learned my lesson with Ukraine), so I don’t think the accusation that Meta is blocking pro-Palestinian content fits my particular situation1.
I looked over things and had no clue. Facebook dutifully told me something about the Community Standards and suggested I read them, but there was no reason given, no posts pointed to, nothing. Again. At least when I got put in ‘Facebook Jail’ for posting an image of a Ukrainian tractor pulling the Moskva, they gave me the reasoning that it was fake news. Satire is not news, fake or otherwise.
So I thought to myself, “Why am I on Facebook?” I had reasons long ago. I even had reasons not-so-long-ago. But what are my reasons now? I did an inventory. Facebook gave me no reasons. Sure, you can talk about the value of the community, but Facebook algorithms don’t really let me see people’s posts as I would like, and I have too many friends, relatives and acquaintances to go to each page daily.
In a way, Facebook shadow bans people. You have to pay to place content. Google does something similar, but people with some understanding look beyond the first few search results and even the first page.
The Decision
This problem was part of a storm of problems I have been seeing lately. You see, all I really want to do is write and earn enough to live off of, and maybe put something in the bank for ‘rainy days’. It’s not that complex a goal. People have told me all my life I should write, so here I am, scribbling away. I haven’t published the books I have wanted to yet, but yes, here I am, writing, and part of that writing is research – even for, and especially for, fiction. Fiction has to make sense; reality does not2.
First, generative AI came out and I had to look at how well it wrote – which isn’t really that great, but it gets its writing ability from analyzing popular stuff, so in time it should catch up with your local tabloid and perhaps even surpass it. It can regurgitate facts, so it will do some popular writing and maybe even do it well – but it doesn’t have the capacity to be human, and it never will.
Marketing has already got it spamming Amazon.com with eBooks, and Facebook has plenty of advertising about selling generated content. I even saw advertising on Facebook for generated content that passes AI detection – a great way to make money off people who aren’t actually creating anything of value, instead overtaking the web with generated content, a further step down from the marketed Internet as it is3.
I started a Facebook group as a way to get people to interact with my content, because WordPress.com sort of sucks at that with its separate logins. That wasn’t working out too well; it instead just became a handful of people whom I knew pretty well but who never shared anything I wrote with their friends, as if I were some sort of secret they kept to themselves – and the Facebook algorithms shadow ban anyone who doesn’t pay. I did promote a few posts in the past, but it just made no sense and wasn’t cost effective for me.
Factor in that WordPress.com and Tumblr, where my sites presently reside, volunteered their users’ content for AI training and then told us how to opt out of them selling our blogs – and perhaps even personal information. I wrote about what you can do about that, but really, the trust has been eroded there. As someone who pays for hosting, it’s injurious insult and insulting injury at the same time.
Without reflection, without mercy, without shame, they built strong walls and high, and compassed me about.
And here I sit now and consider and despair.
It wears away my heart and brain, this evil fate: I had outside so many things to terminate.
Oh! why when they were building could I not beware!
But never a sound of building, never an echo came. Insensibly they drew the world and shut me out.
C.P. Cavafy, “Walls”
The trouble is that I’m not very good at despair. People have moved my cheese before. When there is no clear path, you think like a burglar or a man with a sledgehammer.
The decision became easy once I took it all in within my broader context. My Facebook use would become passive, a placeholder. I’ll check in on it now and then, but there’s no real reason to be there training Meta’s AI on how to fleece users of their digital shadows. I’ll just not play their game because I don’t like their prizes.
But It Doesn’t End There.
While backing up some things to my Google Drive, I ran across something interesting when I was giving the appropriate permissions. What’s below should show up for you if you get to the link:
My activities off Meta technologies? Well, who has been sharing data with Meta as part of their business analytics? It turns out it was a pretty exhaustive list.
I was disconnecting them one by one when I found the way to do it en masse. Pictured, you can see part of the 263 websites that were apparently reporting to Meta about stuff – it was closer to 300 before I started. I’d say maybe 280 or so, but I don’t know, I was in a rhythm.
I disconnected them all.
The only thing happening with my Facebook now is the updating of the RealityFragments page, which presently happens automatically because of settings on Facebook – and even that may end soon as I think it through. After all, I noticed that the updates hadn’t been happening automatically for some time.
In the end, I’ve found Facebook not worth it and I was only continuing out of habit. It certainly took more information from me than I like, and it gave me little in return.
So What Social Network Will I Use?
One person suggested Mastodon. Mastodon, when I did try to use it a few times, made me feel like I was attempting to ‘shag an unwilling Octopus’. LinkedIn was also suggested, but the joke I have heard from people way too many times is, “You still use LinkedIn?”.
Twitter has been Musked, and I never really liked it anyway – it has always been cliquish, and it was as unfriendly then as it is now, just with more oversight and less Musk.
I’ll look things over as time permits, but my focus now is on writing – not handing people my data so that they can make money. To be fair, Mastodon has the most promise on paper, but my experience with trying to set that up last year did not make me think highly of it – and I’ve compiled my own Linux kernels. Maybe that has changed.
I know that I’ve changed. I know that people can find my content through search engines – where most visitors come from for me – and if they want to share it, they can. I’m tired of caring about that.
There are some funny memes going around about TikTok and… Chinese spyware, or what have you. The New York Times had a few articles on TikTok last week that were interesting and yet… missed a key point that the memes do not.
Being afraid of Chinese Spyware while so many companies have been spying on their customers seems more of a bias than anything.
Certainly, India got rid of TikTok and has done better for it. Personally, I don’t like giving my information to anyone if I can help it, but these days it can’t be helped. Why is TikTok an issue in the United States?
It’s not too hard to speculate that it’s about lobbying by American tech companies who lost the magic sauce for this generation. It’s also not hard to consider India’s reasoning about China being able to push its own agenda, particularly with violence on their shared borders.
Yet lobbying from the American tech companies is most likely, because they want your data and don’t want you to give it to China. They want to be able to sell you stuff based on what you’ve viewed, liked, posted, etc. So really, it’s not even about us.
It’s about the data that we give away daily when browsing social networks of any sort, or websites, or even when you think you’re being anonymous using Google Chrome when in fact you’re still being tracked. The people advocating a TikTok ban aren’t holding anyone else’s feet to the fire; they lean on ‘they will do stuff with your information’ when in fact we’ve had a lot of bad stuff happen with our information over the years.
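To make ‘being tracked’ concrete, here’s a toy sketch of the classic tracking pixel, one mechanism behind much of that off-site data collection. Everything here – the port, the parameter names – is invented for illustration; real trackers are far more elaborate:

```python
# A toy tracking-pixel server: any page that embeds
#   <img src="http://localhost:8080/pixel.gif?page=/some-article">
# silently reports the visit to whoever runs this endpoint.
# The hostname, port, and parameters are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A transparent 1x1 GIF: the classic "web beacon".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class Tracker(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        # All of this arrives for free with the image request:
        print("visit:", query.get("page"),
              "| ip:", self.client_address[0],
              "| referer:", self.headers.get("Referer"),
              "| agent:", self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Tracker).serve_forever()
```

Any page embedding the image reports the visit, the referring page, the browser, and the IP address – no login, no consent dialog, nothing visible to the user. That, multiplied across hundreds of sites, is roughly what that ‘off Meta technologies’ list represents.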
What value do you get for that? They say you get better advertising, which is something that I boggle at. Have you ever heard anyone wish that they could see better advertising rather than less advertising?
They say you get the stuff you didn’t even know you wanted, and to a degree, that might be true, but the ability to just go browse things has become a lost art. Just about everything you see on the flat screen you’re looking at is because of an algorithm deciding for you what you should see. Thank you for visiting, I didn’t do that!
Even that system gets gamed. This past week I got an ‘account restriction’ from Facebook for reasons that were not explained, other than instructions to go read the community standards – because algorithms are making decisions based on behaviors that Facebook can’t seem to explain. Things really took off with that during Covid, where even people I knew were spreading some wrong information because they didn’t know better and, sometimes, willfully didn’t want to know better or understand their own posts in a broader context.
Am I worried about TikTok? Nope. I don’t use it. If you do use TikTok, you should be. But you should worry if you use any social network. It’s not as much about who is selling and reselling information about you as it is about what they can do with it to control what you see.
Of course, most people on those platforms don’t see them for what they are, instead taking things at face value and not understanding the implications for the choices they will have in the future – from the advertising to the content that one views.
One of the ongoing issues that people maybe haven’t paid as much attention to is related to the United States Supreme Court and social networks.
That this has a larger impact than just within the United States takes a little bit of understanding. Still, we’ll start in the United States and what started the ball rolling.
“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.
Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”
This deals with the last United States presidential election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and a lot of false information that was spread, and even there we see arguments about whether the government should be the only one spreading false information.
Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese gorilla (TikTok), are mainly based in the United States. Facebook. The Social Network formerly known as Twitter. So the servers all fall under US jurisdiction.
Why is that data important? Because it’s being used to train artificial intelligences. It’s about who trains their artificial intelligences faster, really.
It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. This means that, since technology companies lobby and support politicians, the social networks you use have more free speech than their users combined, based on their income alone – not to mention their ability to choose what you see, what you can say, and whom you can say it to, via algorithms that they can’t seem to master themselves. In a way that’s heartening; in a way it’s sickening.
So the Supreme Court ruling on whether the United States government can interfere with social networks is also about who collects the data, and about what sort of information will be used to train the artificial intelligences of the present and future.
The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.
So who should control what you can post? Should governments decide? Should technology companies?
These days, few trust either. It seems like we need oversight on both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seat-belts.
I’ve seen more and more people leaving Facebook because their content just isn’t getting into timelines. The possibilities there are interesting to consider. While some of the complaints about the Facebook algorithms are fun to read, writing those sorts of complaints doesn’t really mean much. It’s not as if Facebook is going to change its algorithms over complaints.
As I pointed out to people, people using Facebook aren’t the customers. People using Twitter-X aren’t the customers either. To be a customer, you have to buy something. Who buys things on social networks? Advertisers are one, of course.
That’s something Elon Musk didn’t quite get the memo on. Why would he be this confident? Hubris? Maybe – that always seems a factor – but it’s probably something more sensible.
There’s something pretty valuable in social networks that people don’t see. It’s the user data, which is strangely what the canceled Westworld was about. The real value is in being able to predict what people want and influence outcomes, much as the television series showed after the first season.1
Many people seem to think that privacy is only about credit card information and personal details. It also includes choices that allow algorithms to predict choices. Humans are black boxes in this regard, and if you have enough computing power you can go around poking and prodding to see the results.
Artificial intelligences need learning models, and if you own a social network, you not only get to poke and prod – you have the potential to influence. Do your future choices fall under privacy? Probably not – but your past choices probably should, because that’s how you get to predicting and influencing future choices.
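As a deliberately crude illustration of that last point – how past choices become a prediction engine – here’s a toy feed-ranker. The topics and click history are invented, and real recommender systems are vastly more sophisticated, but the principle is the same:

```python
# A deliberately crude recommender: count what a user engaged with before,
# and rank what to show next by those past choices. The data is invented;
# the principle is simply that your history is the model.
from collections import Counter

past_clicks = ["politics", "cats", "politics", "gadgets",
               "politics", "cats", "politics"]

profile = Counter(past_clicks)  # the "digital shadow": topic -> weight

candidates = ["cats", "gardening", "politics", "gadgets"]

# Rank candidate posts by how often this user chose that topic before.
feed = sorted(candidates, key=lambda topic: profile[topic], reverse=True)
print(feed)  # ['politics', 'cats', 'gadgets', 'gardening']
```

Scale that from seven clicks to years of clicks across billions of people, and the poking and prodding stops being a metaphor.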
I never really got into Twitter. Facebook was less interruptive. On the surface, these started off as content management systems that provided a service and had paid advertising to support it, yet now one has to wonder at the value of the user data. Back in 2018, the news broke that Cambridge Analytica had harvested data from 50 million Facebook users. Zuckerberg later apologized and talked about how 3rd party apps would be limited. To his credit, I think it was handled pretty well.
Still, it also signaled how powerful and useful that data could be, and if you own a social network, that would at least give you pause. After all, Cambridge Analytica influenced politics at the least, and that could have also influenced markets. The butterfly effect reigns supreme in the age of big data and artificial intelligence.
This is why privacy is important in the age of artificial intelligence learning models, algorithms, and so forth. It can impact the responses one gets from any large language model, which is why there are pretty serious questions regarding privacy, copyright, and other things related to training them. Bias leaks into everything, and popular bias on social networks is simply about what is most vocal and repetitive – not about what is actually correct. This is also why canceling as a cultural phenomenon can be so damaging. It’s a nuclear option in the world of information, and oddly, large groups of smart or stupid people can use it with impunity.
This is why we see large language models hedge on some questions presently – because of conflicts within the learning model as well as some well-designed algorithms. For that we should be a little grateful.
We should probably be lobbying to find out what is in the learning models that artificial intelligences are given, in much the same way we used2 to grill people who would represent us collectively. Sure, Elon Musk might be taking a financial hit, but what if it’s a gambit to leverage user data for bigger returns later, with his ethics embedded in how he gets his companies to do that?
You don’t have to like or dislike people to question them and how they use this data, but we should all be a bit concerned. Yes, artificial intelligence is pretty cool and interesting, but unleashing it without questioning the integrity of the information it’s trained on is, at the least, foolish.
Be careful what you share, what you say, who you interact with and why. Quizzes that require access to your user profile are definitely questionable, as that information – and the information of people you are connected with – quickly gets folded into data creating a digital shadow of yourself, part of the larger crowd that can influence the now and the future.
This is not to say it was canceled for this reason. I only recently watched it, and have yet to finish season 3, but it’s very compelling and topical content for the now. Great writing and acting. ↩︎
We don’t seem to be that good at grilling people these days, perhaps because of all of this and more. ↩︎
It sort of already has, as even he points out in his article.
That, you see, is the trouble. We don’t know the training models for these artificial intelligences, we don’t know what biases are inherent in them, and we’re at the mercy of whoever is responsible for them. We’re hoping that they’re thoughtful and considerate and not more concerned with money than people.
That really hasn’t worked out so well for us in the past. Yet the present is here in all its glory, unrepentant. It’s happening more obviously now with the news, since next year we get artificial news anchors. It’s being used to fight misinformation on social media platforms like Facebook without even explaining to Facebook users why posts are removed and what they contained that was worth removing them for. It’s here, and it has been here for a while.
Pandora’s box has been opened, and the world will never quite be the same again. Archimedes once talked about having a lever long enough.
Nowadays it’s just a matter of a choice of fulcrum.
Democracy, based on the idea that informed people can make informed choices in their own interest and the common good, could easily become misDemocracy, where the misinformed make misinformed choices that they think are in their own interests and in what they think is the common good.
It’s likely that at some point we’ve all spread some misinformation involuntarily. It can have dire consequences, too. The Washington Post has an article on misinformation, but they forgot the most important thing, I think.
Waiting.
‘Trusted sources’ has been a problem that I’ve been studying since we were working on the Alert Retrieval Cache. In an actual emergency, knowing which information you can trust from the ground and elsewhere is paramount. I remember Andy Carvin asking me how Twitter could be used for the same and I shook my head, explaining the problem that no one seemed to want to listen to: The problem is that an open network presents problems with flawed information getting accepted as truth.
Credentialism is a part of the problem. We expect experts to be all-knowing when in fact being an expert itself has no certification. It requires having been right before, yet we want right now, and unfortunately the truth doesn’t work that way.
We see a story on social media and we share it, sometimes without thinking, which is why bad news travels faster than good news.1
The easiest way to avoid spreading misinformation is to do something we’re not very good at in a society that pulses like a tachycardic heart: we wait and see what happens. We pause, and if we must pass something along to our social networks, we say we’re not sure it’s real. Since headlines are usually generated by algorithms to catch eyes and to spread like Covid-19, we have to read the stories and check the facts before we share, rather than sharing off the cuff.
Somewhere along the line, the right now trumped being right, and we see it everywhere. By simply following a story before sharing it, you can stop spreading misinformation and stop the virus of misinformation in its tracks. Let the story develop. See where it goes. Don’t jump in immediately to write about it when you don’t actually know much about it.
Check news sources for the stories. Wait for confirmation. If it’s important enough to post, point out that it’s unconfirmed.
When I first read, two weeks ago, about content moderators speaking of psychological trauma from moderating Big Tech’s content for training models, I waited for the other shoe to drop. Instead, aside from a BBC mention related to Facebook, the whole thing seems to have dropped off the media’s radar.
The images pop up in Mophat Okinyi’s mind when he’s alone, or when he’s about to sleep.
Okinyi, a former content moderator for Open AI’s ChatGPT in Nairobi, Kenya, is one of four people in that role who have filed a petition to the Kenyan government calling for an investigation into what they describe as exploitative conditions for contractors reviewing the content that powers artificial intelligence programs.
“It has really damaged my mental health,” said Okinyi.
The 27-year-old said he would view up to 700 text passages a day, many depicting graphic sexual violence. He recalls he started avoiding people after having read texts about rapists and found himself projecting paranoid narratives on to people around him. Then last year, his wife told him he was a changed man, and left. She was pregnant at the time. “I lost my family,” he said.
I expected more on this because it’s… well, it’s terrible to consider, especially at between $1.46 and $3.74 an hour through Sama. Sama is a data annotation services company headquartered in California that employs content moderators around the world. As their homepage says, “25% of Fortune 50 companies trust Sama to help them deliver industry-leading ML models”.
Thus, this should be a bigger story, I think, but since it’s happening outside of the United States and Europe, it probably doesn’t score big with the larger media houses. The BBC differs a little in that regard.
A firm which was contracted to moderate Facebook posts in East Africa has said with hindsight it should not have taken on the job.
Former Kenya-based employees of Sama – an outsourcing company – have said they were traumatised by exposure to graphic posts.
Some are now taking legal cases against the firm through the Kenyan courts.
Chief executive Wendy Gonzalez said Sama would no longer take work involving moderating harmful content.
The CEO of Sama says that they won’t be taking further work related to harmful content. The question then becomes whether something is harmful content or not, so there’s no doubt in my mind that Sama is in a difficult position itself. She points out that Sama has ‘lifted 65,000 people out of poverty’.
The BBC article also touches on the OpenAI issue raised in The Guardian article above.
We have global poverty, economic disparity, big tech and the dirty underbelly of AI training models and social media moderation…
This is something we should all be following up on, I think. It seems like ‘lifting people out of global poverty’ is big business in its own way, too, and that is just a little bit disturbing.