Beyond A Widowed Voice.

By now, the news of Scarlett Johansson’s dispute with OpenAI over a voice that sounds like hers has made the rounds. She’s well known, and regardless of one’s interests, she’s likely to pop up in various contexts. However, she’s not the first.

While different in some ways, voice actors Paul Skye Lehrman and Linnea Sage are suing Lovo for similar reasons. They were hired for what they thought were one-off voice-overs, then heard their voices saying things they had never said. More to the point, they heard their voices doing work they had never been paid for.

The way they found out was oddly poetic.

Last summer, as they drove to a doctor’s appointment near their home in Manhattan, Paul Skye Lehrman and Linnea Sage listened to a podcast about the rise of artificial intelligence and the threat it posed to the livelihoods of writers, actors and other entertainment professionals.

The topic was particularly important to the young married couple. They made their living as voice actors, and A.I. technologies were beginning to generate voices that sounded like the real thing.

But the podcast had an unexpected twist. To underline the threat from A.I., the host conducted a lengthy interview with a talking chatbot named Poe. It sounded just like Mr. Lehrman.

“He was interviewing my voice about the dangers of A.I. and the harms it might have on the entertainment industry,” Mr. Lehrman said. “We pulled the car over and sat there in absolute disbelief, trying to figure out what just happened and what we should do.”

“What Do You Do When A.I. Takes Your Voice?”, Cade Metz, New York Times, May 16th, 2024.

They aren’t sex symbols like Scarlett Johansson. They weren’t the highest paid actresses in 2018 and 2019. They aren’t seen in major films. Their problem is just as real, just as audible, but not quite as visible. Forbes covered the problems voice actors faced in October of 2023.

…Clark, who has voiced more than 100 video game characters and dozens of commercials, said she interpreted the video as a joke, but was concerned her client might see it and think she had participated in it — which could be a violation of her contract, she said.

“Not only can this get us into a lot of trouble if people think we said [these things], but it’s also, frankly, very violating to hear yourself speak when it isn’t really you,” she wrote in an email to ElevenLabs that was reviewed by Forbes. She asked the startup to take down the uploaded audio clip and prevent future cloning of her voice, but the company said it hadn’t determined that the clip was made with its technology. It said it would only take immediate action if the clip was “hate speech or defamatory,” and stated it wasn’t responsible for any violation of copyright. The company never followed up or took any action.

“It sucks that we have no personal ownership of our voices. All we can do is kind of wag our finger at the situation,” Clark told Forbes.

“‘Keep Your Paws Off My Voice’: Voice Actors Worry Generative AI Will Steal Their Livelihoods”, Rashi Shrivastava, Forbes.com, October 9th, 2023.

As you can see, the whole issue is not new. It just became more famous because of a more famous face, and because it involves OpenAI, a company with more questions about its training data than ChatGPT can answer, the story has been sung from the rooftops.

Meanwhile, some are trying to license the voices of dead actors.

Sony recently warned AI companies about unauthorized use of the content it owns, but when one’s content is necessarily public, how does one protect it?

How much of what you post, from writing to pictures to voices in podcasts and family videos, can you control? Posting costs nothing, but the scraping can cost individuals their futures. And when it comes to training models, these AI companies are eroding the very trust they need from the people they want to sell their product to – unless they’re just enabling talentless and incapable hacks to take over jobs that talented and capable people already do.

We have more questions than answers, and the trust erodes as more and more people are impacted.

AI, Democracy, India.

India is the world’s most populous democracy. There has been a lot going on there related to religion that is well beyond the scope of this piece, but it deserves mention because violence has been involved.

The Meta Question.

In the latest news, Meta stands accused of approving political ads on its platforms, Instagram and Facebook, that incited violence.

This, apparently, was a test, according to The Guardian.

How this happened seems a little strange and is noteworthy¹:

“…The adverts were created and submitted to Meta’s ad library – the database of all adverts on Facebook and Instagram – by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, to test Meta’s mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India’s six-week election…”

“Revealed: Meta approved political ads in India that incited violence”, Hannah Ellis-Petersen in Delhi, The Guardian, 20 May 2024.

It’s hard to judge the veracity of the claim based on what I dug up (see the footnote). The Guardian must have more from their sources to be willing to publish the piece – I have not seen this from them before – so I’ll assume good faith and see how this pans out.

Meta claims to be making efforts to minimize false information, but Meta also doesn’t have a great track record.

The Deepfake Industry of India.

Wired.com also has a story, one with some original investigation in it, that does not relate to Meta.

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”² by Wired.com goes into great detail about Divyendra Singh Jadoun and how well his business is doing.

“…Across the ideological spectrum, they’re relying on AI to help them navigate the nation’s 22 official languages and thousands of regional dialects, and to deliver personalized messages in farther-flung communities. While the US recently made it illegal to use AI-generated voices for unsolicited calls, in India sanctioned deepfakes have become a $60 million business opportunity. More than 50 million AI-generated voice clone calls were made in the two months leading up to the start of the elections in April—and millions more will be made during voting, one of the country’s largest business messaging operators told WIRED.

Jadoun is the poster boy of this burgeoning industry. His firm, Polymath Synthetic Media Solutions, is one of many deepfake service providers from across India that have emerged to cater to the political class. This election season, Jadoun has delivered five AI campaigns so far, for which his company has been paid a total of $55,000. (He charges significantly less than the big political consultants—125,000 rupees [$1,500] to make a digital avatar, and 60,000 rupees [$720] for an audio clone.) He’s made deepfakes for Prem Singh Tamang, the chief minister of the Himalayan state of Sikkim, and resurrected Y. S. Rajasekhara Reddy, an iconic politician who died in a helicopter crash in 2009, to endorse his son Y. S. Jagan Mohan Reddy, currently chief minister of the state of Andhra Pradesh. Jadoun has also created AI-generated propaganda songs for several politicians, including Tamang, a local candidate for parliament, and the chief minister of the western state of Maharashtra. “He is our pride,” ran one song in Hindi about a local politician in Ajmer, with male and female voices set to a peppy tune. “He’s always been impartial.”…”

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”, Nilesh Christopher & Varsha Bansal, Wired.com, 20 May 2024.

Al Jazeera has a video on this as well.

In the broader sense of how it is being used, audio deepfakes have people really believing that they were called personally by candidates. This has taken robocalling to a whole new level³.

What we are seeing is the manipulation of opinions in a democracy through AI, and it’s something that, while happening in India now, is certainly worth worrying about in other nations. Banning something in one country, or making it illegal, does not mean that foreign actors won’t do it where those laws have no hold.

Given India’s increasingly visible stance in the world, we should be concerned, but given AI’s increasing use in global politics to shape opinions, we should be very worried indeed. This is just what we see. What we don’t see is the data collected by a lot of services, how it can be used to decide who is most vulnerable to particular types of manipulation, and what that means.

We’ve built a shotgun from instructions on the Internet and have now loaded it and pointed it at the feet of our democracies.

  1. Digging into the referenced report (PDF), there’s no ownership of the report stated within the document, though it sits on the Eko.org web server – with no links to it from the site itself at the time of this writing. There’s nothing on the India Civil Watch International (ICWI) website at the time of this writing either.

    That’s pretty strange. The preceding report referenced in the article is here on LondonStory.org. Neither the ICWI nor the Eko websites seem to have that either. Having worked with some NGOs in the Caribbean and Latin America, I know that they are sometimes slow to update websites, so we’ll stick a pin in it. ↩︎
  2. Likely paywalled if you’re not a Wired.com subscriber, and no quotes would do it justice. Links to references provided. ↩︎
  3. I worked for a company that was built on robocalling, but went to higher ground with telephony by doing emergency communications instead, so it is not hard for me to imagine how AI can be integrated into it. ↩︎

When The Internet Eats Itself

The recent news of Stack Overflow selling its content to OpenAI was something I expected. It was a matter of time. Users of Stack Overflow were surprised, which I am surprised by, and upset, which I’m not surprised by.

That seems to me a reasonable response. Who wouldn’t be? Yet when we contribute to a website for free on the Internet and it’s not our website, it’s always a terrible bargain. You give of yourself for whatever reason – fame, prestige, or just sincerely enjoying helping – and it gets traded into cash by someone else.

But companies don’t want you to get wise. They want you to give them your content for free so that they can tie a bow around it and sell it. You might get a nice “Thank you!” email, or little awards of no value.

No Good Code Goes Unpunished.

The fallout has been disappointing. People have tried logging in and sabotaging their top answers. I spoke to one guy on Mastodon a few days ago and he got banned for it. It seems pretty obvious to me that Stack Overflow had already backed up the database where everything was, and that they would be keeping an eye on things. Software developers should know that. There was also some confusion about the Creative Commons licensing the site uses versus the rights given to the owners of the website – two separate things, not mutually exclusive ones.

Is it slimy? You bet. It’s not new, and the companies training generative AI have been pretty slimy. The problem isn’t generative AI, it’s the way the companies decide to do business – eroding trust with the very market for their product while poisoning wells that they can no longer drink from. If you’re contributing answers for free that will be used to train an AI to give the same answers for a subscription, you’re a silly person¹.

These days, generative AI companies need to put filters on the front of their learning models to keep small children from getting sucked in.

Remember Huffington Post?

Huffington Post had this neat little algorithm for swapping around headlines until it found one that people clicked on. It gamed SEO, and it built itself into a powerhouse that almost no one remembers now. It was social, it was quirky, and it was fun. Volunteers put up lots of great content.
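That headline-swapping trick is, at its core, a bandit-style A/B test: show the current best headline to most readers, and keep trying the alternatives on a small slice of traffic. Here’s a minimal sketch of the general technique in Python; the headlines, numbers, and epsilon-greedy loop are my own hypothetical illustration, not HuffPost’s actual system.

```python
import random

# Hypothetical headline variants -- illustrative, not real data.
headlines = {
    "You Won't Believe What Congress Did": {"shown": 0, "clicks": 0},
    "Congress Passes Budget Bill": {"shown": 0, "clicks": 0},
    "Budget Bill Passes After Late-Night Vote": {"shown": 0, "clicks": 0},
}

EPSILON = 0.1  # fraction of traffic used to keep exploring alternatives

def pick_headline() -> str:
    """Mostly show the best-performing headline, occasionally test the others."""
    if random.random() < EPSILON:
        return random.choice(list(headlines))
    # Exploit: pick the highest click-through rate so far (avoid dividing by zero).
    return max(headlines, key=lambda h: headlines[h]["clicks"] / max(headlines[h]["shown"], 1))

def record_impression(headline: str, clicked: bool) -> None:
    """Update the counts after a reader sees (and maybe clicks) a headline."""
    headlines[headline]["shown"] += 1
    if clicked:
        headlines[headline]["clicks"] += 1

# Simulate traffic where readers click the second variant about twice as often.
for _ in range(10_000):
    h = pick_headline()
    rate = 0.04 if h == "Congress Passes Budget Bill" else 0.02
    record_impression(h, random.random() < rate)

for h, stats in headlines.items():
    print(h, stats)
```

Note that nothing in that loop cares whether the winning headline is accurate; it only cares that it gets clicked.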

When Huffington Post sold for $315 million, the volunteers who had provided the content for free and built the site up before the sale sued – and got nothing. Why? Because they had volunteered their work.

I knew a professional journalist who was building up her portfolio and added some real value – I met her at a conference in Chicago probably a few months before the sale, and I asked her why she was contributing to HuffPost for free. She said it was a good outlet to get some things out – and she was right. When it sold, she was angry. She felt betrayed, and rightfully so I think.

It seems people weren’t paying attention to that. I did².

You live, you learn, and you don’t do it again. With firsthand and secondhand experience, if I’m going to write on a website and not get paid, it’s going to be my website. Don’t trust anyone who says, “Contribute and good things will happen!” Yeah, they might, but it’s unlikely they will happen for you.

If your content is good enough for a popular site, it’s good enough to get paid to be there. You in the LinkedIn section – pay attention.

Back To AI’s Intake Manifold.

I’ve written about companies with generative AI models scraping around looking for content, with works contributed to websites being part of the training data. It’s their oil; it’s what keeps them burning through cash as they try to… replace the people whose content they use. In return, the Internet gets slop generated all over it, and you’ll know the slop when you read it – it lacks soul and human connection, though it fakes it from time to time, like the pornographic videos that make the inexperienced think that’s what sex is really like. Nope.
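There is one small lever site owners have against the scraping: robots.txt. OpenAI’s GPTBot and Common Crawl’s CCBot are published crawler user agents that claim to honor it, though honoring it is entirely voluntary. As a minimal sketch, using Python’s standard library and a placeholder domain, you can check what a site asks of those crawlers:

```python
from urllib.robotparser import RobotFileParser

# GPTBot is OpenAI's published crawler user agent; CCBot is Common Crawl's.
# Whether any crawler actually honors robots.txt is voluntary -- this only
# checks what a site *asks* crawlers to do.
SITE = "https://example.com"  # placeholder domain, not a real target

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the site's robots.txt

for agent in ("GPTBot", "CCBot", "*"):
    allowed = parser.can_fetch(agent, f"{SITE}/some-article")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

That’s roughly the extent of the control most of us have: a polite request that a crawler is free to ignore.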

The question we should be asking is whether it’s worth putting anything on the Internet at this point, just to have it folded into a statistical algorithm that chews up our work and spits out something like it. Sure, there are copyright lawsuits happening. But the argument that these are transformative works doesn’t hold up well in a sane mind, given the sheer volume of content used to create a generative AI at this point.

So what happens when fewer people contribute their own work? One thing is certain: the social aspect of the Internet will not thrive as well.

Social.

The Stack Overflow website was mainly an annoyance for me over the years, but I understand that many people had a thriving society of a sort there. It was largely a meritocracy, as open source is, at least at its core. You’ll note that I’m writing of it in the past tense – I don’t think anyone with any bit of self-worth will contribute there anymore.

The annoyance aspect for me came from (1) not finding solutions to the quirky problems that people hired me to solve³, and (2) finding code fragments I tracked down to Stack Overflow poorly (if at all) adapted to the employer’s or client’s needs. I had also learned not to give away valuable things for free, so I didn’t get involved. Most, if not all, of the work I did required my silence on how things worked, and if you get on a site like Stack Overflow, your keyboard might just get you in trouble. Yet the problem wasn’t the site itself, but those who borrowed code like it was a cup of sugar instead of a recipe.

Beyond us software engineers, developers, whatever they call themselves these days, there are a lot of websites with social interaction that are likely getting their content shoved into an AI learning model at some point. LinkedIn, owned by Microsoft and annoyingly in the top search results, is ripe for being used that way.

LinkedIn doesn’t pay for content, yet if you manage to get popular, you can make money off sponsored posts. “Hey, say something nice about our company, here’s $x.” That’s not really social, but it’s how ‘influencers’ make money these days: sponsored posts. When you get paid to write posts that way, you might be selling your soul unless you keep a good moral compass, but when bills need to get paid, that moral compass sometimes goes out the window. I won’t say everyone is like that, but I will say it’s a danger, and it’s why I don’t care much about ‘influencers’.

In my mind, anyone who is an influencer is trying to sell me something, or has an ego so large that Zaphod Beeblebrox would be insanely jealous.

Regardless, to get popular, you have to contribute content. Who owns LinkedIn? Microsoft. Who is Microsoft partnered with? OpenAI. The dots are there. Maybe they’re not connected. Maybe they are.

Other websites are out there that are building on user content. The odds are good that they have more money for lawyers than you do, that their content licensing and user agreement work for them and not you, and if someone wants to buy that content for any reason… you’ll find out what users on Stack Overflow found out.

All relationships are built on trust. All networks are built on trust. The Internet is built on trust.

The Internet is eating itself.

  1. I am being kind. ↩︎
  2. I volunteered some stuff to WorldChanging.com way back when with the understanding that it would be Creative Commons licensed. I went back and forth with Alex and Jamais, as did a few other contributors, and because of that and some nastiness related to the Alert Retrieval Cache, I walked away from the site – only to find out later, from an editor who contacted me about their book, that they wanted to use some of my work. Nope. I don’t trust futurists, and maybe you shouldn’t either. ↩︎
  3. I always seemed to be the software engineer who could make sense out of gobbledygook code, rein it in, take it to water, and convince it to drink. ↩︎

TikTok: China Struck Back.

ByteDance, the owner of TikTok, is, of course, suing the U.S. government over the potential ban of TikTok – but there was something at the very bottom of an Al Jazeera article.

For its part, China has taken similar actions against US-based companies like Meta, whose WhatsApp and Threads platforms were recently ordered to be removed from Chinese-based app stores over questions of national security.

“TikTok owner ByteDance files lawsuit against US law forcing app’s sale”, Al Jazeera, 7 May 2024¹

That’s a pretty important point. Without the move against ByteDance, there would be no reason for China to ratchet things up – and the ratchet is largely symbolic given the Great Firewall of China.

The First Amendment issue the U.S. government is being taken to task over – and with fair reason – concerns TikTok’s own First Amendment rights as a company. The First Amendment rights of U.S. users serve to cloud the issue, but these things are separate. ByteDance is defending its rights, not the rights of users.

That China doesn’t have an equivalent of the First Amendment seems to be constantly absent from the media coverage of this, as people worry about their own First Amendment rights… even as they don’t understand the algorithms, or the cost they incur on a centralized platform they have no control over. Like any platform these days, aside from the Fediverse.

Better informed users might make better informed decisions. The Fediverse awaits.

  1. Al Jazeera just gave an example of being even-handed by adding that point – on the heels of itself being banned in Israel. ↩︎

AI: Technology, Skepticism, and Your Underwear.

Here are two images depicting futuristic underwear powered by AI technology. The designs are sleek and modern, featuring smart fibers and sensors, with a minimalist setting to emphasize the advanced technology.

There’s a balance between technology and humanity that at least some of us think is out of desirable limits now. In fact, it’s been out of limits for some time, and to illustrate the fact I had DALL-E generate some images of AI-powered underwear – because if technology were resting close to one’s nether regions, it might be something one would be more careful about, from privacy to quality control.

“…Those social media companies, again, offered convenience and an — as well know — to good to be true promise of free and open access. We closed our blogs, got in line and ceded control of our social graphs. Drawbridges were rolled up, ads increased and nobody left — at least not in droves. Everyone is there so everyone stays.

Journalists and newspapers were drawn in, promised an audience and were gifted with capricious intermediaries that destroyed the profession and industry.

We lost our handle on what is and was true, stopped having conversations and started yelling at their representations. It became easier to shout at someone on line than it was to have a healthier discourse..”

“The tech industry doesn’t deserve optimism, it has earned skepticism”, Cory Dransfeldt, CoryD.Dev, May 6th, 2024.

Cory writes quite poignantly in that article of the promises made by technology in the past. In that excerpt, he also alludes to what I call the ‘Red Dots‘ that keep us distracted, increasingly feral, and rob us of what is really important to us – or should be.

This melds well with a point in a Scientific American opinion piece I read today, particularly the aspect of AI and social media:

…A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to the most insecure version of ourselves and push our buttons to achieve their own desired ends…

“AI Doesn’t Threaten Humanity. Its Owners Do”, Joseph Jones, Scientific American, May 6th, 2024.

Again, these are things I have written about regarding AI, which connects to social media, which connects to you, gentle reader – your habits, your information, your privacy. In essence, your life.

I’d say we’re on the cusp of something, but it’s the same cusp. We can and should be skeptical of these companies trying to sell us on a future that they promise us but have never really given us.

There’s a flood of AI marketing, silly AI products, and sillier ideas about AI that confound me, like AI chatbots in the publishing industry so you can chat with a bot about a book you read.

There’s silly, and there’s worrisome, like teens becoming addicted to AI chatbots – likely because there’s less risk than dealing with other human beings, the possible feelings of rejection, the anxiety associated with it, and the very human aspect of…

Being human.

I have advocated and will continue to advocate for sensible use of technology, but I’m not seeing as much of it mixed in with the marketing campaigns that suck the air out of the room, filling it with a synthetic ether that has the potential to drown who we are rather than help us maximize who we can be.

We should be treating new technology like underwear we will put next to our nether regions. Carefully. Be picky. And if it’s too good to be true – it likely is. And if the person selling it to you is known for shortcuts, expect uncomfortable drafts around your nether regions.

I’d grown tired of writing about it, and thank those whose articles got mentioned for giving me that second wind.

TikTok, History and Issues.

I’ve been noting a lot of blow-back related to the ‘Coming Soon’ ban of TikTok in the United States, and after writing, ‘Beyond TikTok *Maybe* Being Banned‘, I found myself wondering… why are people so attached to a social network?

I could get into the obvious reasons – the sunk cost fallacy, where so much time and energy is invested in something that one doesn’t want to leave it. We humans tend toward that despite knowing better.

We see this with all social media, where one can’t simply move content from one place to another easily, much less the connections made. If you back up your Facebook account, as an example, it’s only your information that gets backed up – not all the interactions with everyone you know, and not content you may be tagged in. So you lose that, but it’s sort of like moving to a different geography – you can’t always keep relationships with those tied to your previous one.

Yet the vehemence of some of the posts in defense of TikTok had me digging deeper. It wasn’t just the sunk cost; there was more to it than that. I haven’t used TikTok – not because of some grand reason, I just didn’t find it appealing.

Thus I explored some things. I’m not really for or against TikTok. I am against social media that passes your information on to entities you may not know, where it will be used in ways that are well beyond your control – even anonymized, it doesn’t mean an individual isn’t identifiable. How many times have you pictured a face and not remembered a name? Still, there’s something really sticky about TikTok.

Here’s what I found.

The Start: The Death of Vine.

When Vine curled up and died as Twitter started allowing video uploads, TikTok stood in. Vine was used by a diverse group of people for the regular stuff, including marketing and branding. There was nothing too different from the users of other social media at that point.

It’s appealing to those with short attention spans, and the new average attention span is 47 seconds.

Then Ferguson happened, and Vine ended up becoming part of an identifiable social movement after Michael Brown was shot and killed, largely because of posts by Antonio French (then a St. Louis City alderman) documenting racial tensions in and around Ferguson.

It connected people who were participating in protests, which happens to an extent on other social media, but not, it seems, as much. It makes sense. A short video drains less of a mobile phone’s battery, and the context of a protest is hard to miss in a short video format – so while documenting things, one is more mobile, can post more information in context, and can be seen by a lot of people. That’s a powerful tool for social commentary and social awareness.

It made such an impact that activists mourned the loss of Vine.

TikTok showed up, with recommendation algorithms.

The Vine Replacement.

TikTok has the regular band of social media users, from dancing to brands – but it also filled the vacuum left behind for social awareness and protest. It had other ‘benefits’: being able to use copyrighted music on that platform but not others allowed lip-syncing and dance for a new generation of social media users. And you can ‘stitch’ other videos – combining someone else’s video with yours, allowing commentary on commentary, like a threaded conversation only with combined contexts¹.

There’s a lot of commentary on its algorithm for giving people what they want to see as well.

Certain landmark things happened in the world that highlighted social awareness and activism.

Black Lives Matter

Russian Invasion of Ukraine

Israel-Hamas War.

Plenty of other platforms were used in these, but the younger generations on the planet gravitated to TikTok. It became a platform where they could air their own contexts and promote awareness of things that they care about – though not all minorities may agree – with one large blind spot.

The Blind Spot

That blind spot accounts for 18.6% of the global population: China. There is no criticism of China on TikTok; it’s removed. And maybe because people are caught up in their own contexts, they seem unaware of that, and of the state of human rights in China. It’s a platform where you can protest and air dirty laundry – except in the country where it is headquartered.

It should be at least a little awkward to use a platform for social activism that is headquartered in a country that doesn’t permit it – much less a country that sits 3rd on the list of worst countries for human rights and rule of law as of 2022, below Yemen and Iran.

The Great Firewall of China absolves TikTok’s users of their ignorance by assuring that ignorance.

And interestingly, of the 30+ countries that have banned TikTok, China is one of them. The localized version, Douyin, is subject to censorship by the Chinese Communist Party.

I’d say that should make everyone a little leery about supporting TikTok.

But What Will Come Next?

The TikTok platform certainly has allowed the younger generations to give voice to their situations and issues. That is not a bad thing.

There are a few things happening. TikTok won’t go away for a while; it will more than likely be in court for some years appealing the ban. And if people do care about social awareness and activism, it’s a hard case to make that what’s good for the rest of the world isn’t good for China.

If you truly care about human rights, TikTok is a paradox. It’s hard to have a conscientious conversation about human rights on a platform which doesn’t practice those same rights in its own country.

The key to finding an alternative is an algorithm, since the algorithm is fed by tracking users – users who might not be as keen about being tracked when they understand what that means.

Something will come. Something always does.

  1. This has seen some sociological study, as you can see in “Stitching the Social World: The Processing of Multiple Social Structures in Communication”. ↩︎

“Free Speech” And Social Media.

I’ve seen plenty of folks talking about ‘First Amendment’ and ‘Freedom of Speech’ in the context of TikTok, as I saw on Facebook, as I saw on…

All the way back to AOL. Strangely, I don’t remember the topic on BBSs (Bulletin Board Systems), mainly because everyone on those generally understood the way things are.

As a moderator on websites in the early days of the Internet right up to WSIS, I heard it again and again. “You can’t restrict my freedom of speech!”

Social media platforms are private companies and are not bound by the First Amendment. In fact, they have their own First Amendment rights. This means they can moderate the content people post on their websites without violating those users’ First Amendment rights. It also means that the government cannot tell social media sites how to moderate content. Many state laws to regulate how social media companies can moderate content have failed on First Amendment grounds.

Most sites also cannot, in most cases, be sued because of users’ posts due to Section 230 of the federal Communications Decency Act.

“Free Speech on Social Media: The Complete Guide”, Lata Nott, FreedomForum.

The link for the quote has a great article worth reading, because there are some kinds of speech that you can get in trouble for. No sense rewriting a good article.

So this idea about ‘free speech’ on any platform controlled by anyone other than yourself is incorrect. Wrong.

As long as you don’t break the terms of service or the laws of the country you’re in or the country where the platform is (legally) hosted, you can say whatever you want. The principle of freedom of speech is assumed by a lot of people because it’s in the interests of platforms to let people say whatever they want, as long as it doesn’t impact their ability to do business – irritating other users, threatening them, etc.

Even your own website is subject to the terms and conditions of the host.

There’s a quote falsely attributed to Voltaire that people pass around, too: “To learn who rules over you, simply find out who you are not allowed to criticize.” Powerful words, thoughtful words, unfortunately expressed by someone who is… well, known for the wrong things.

It doesn’t seem to apply that much on social media platforms anyway. I have routinely seen people on Twitter griping about Twitter, on Facebook griping about Facebook… the only social media platform I haven’t seen it on is LinkedIn, but I imagine someone probably did there too.

This idea seems to come up at regular intervals. It could be a generational thing. In a world where we argue over what should be taught in schools, this is one of those things.

Government interference in these platforms’ moderation could be seen as a First Amendment issue. With TikTok, there’s likely going to be a showdown over freedom of speech in that context, but don’t confuse it with the users’ First Amendment rights. It’s strange that ByteDance would make that argument, too, because where the owning company is based, it couldn’t sue its own government. China’s not known for freedom of speech. Ask Tibet.

The second you find yourself defending a platform you don’t control, take a breath and ask yourself if you can’t just do the thing somewhere else. You probably should.

The Fediverse isn’t too different, except you can find a server with rules that work for you to connect to it.

Beyond TikTok *Maybe* Being Banned.

The buzz about the possible TikTok ban has been pretty consistent from what I’ve seen in social media, but it seems like most people don’t get why it’s happening.

One post on Mastodon I read said that it was a way for the government to alienate Gen Z, and I thought: is this network really such a big deal? Anecdotally, I know quite a few people who peruse TikTok, and I shake my head as I explain why it’s not a great social network to use. In fact, the reasons not to use TikTok are pretty much the same reasons people shouldn’t be using Facebook, Instagram, LinkedIn, Twitter/X, and whatever else is out there: they want to know your habits, as I wrote.

In that regard, if TikTok is used so exclusively by Gen Z, it’s easy to imagine lobbyists from the big social network companies pushing for TikTok to be banned. That is likely, since all that data on Gen Z isn’t in their hands and they believe it should be. But it goes a bit deeper.

U.S. officials fear that the Chinese government is using TikTok to access data from, and spy on, its American users, spreading disinformation and conspiracy theories...

“Congress approved a TikTok ban. Why it could still be years before it takes effect”, Rob Wile and Scott Wong, NBC News, April 23rd, 2024.

That’s fair. We have enough domestic (American) disinformation and conspiracy theories during the 2024 election; we don’t need other governments adding their own to their benefit, as happened in 2016 with Russia.

Interestingly, and perhaps unrelated, the U.S. Senate passed a bill renewing FISA, which makes discussion about a ban of any foreign social media a little awkward.

“It’s important that people understand how sweeping this bill is,” said Sen. Ron Wyden, D-Ore., a member of the Intelligence Committee and outspoken proponent of privacy protections. “Something was inserted at the last minute, which would basically compel somebody like a cable guy to spy for the government. They would force the person to do it and there would be no appeal.”…

“Senate passes bill renewing key FISA surveillance power moments after it expires”, Frank Thorp V, Sahil Kapur and Ryan Nobles, NBC News, April 20th, 2024.

Articles about FISA are very revealing, but people who are focused on the TikTok ban alone are missing some great information. This article by Hessie Jones on Forbes puts together some pretty great quotes – so much so that I won’t quote it and will just point you at it: “Data Privacy And The Contested Extension Of FISA, Section 702” (April 23rd, 2024).

You see, it’s not just about foreign data:

…Under FISA’s Section 702, the government hoovers up massive amounts of internet and cell phone data on foreign targets. Hundreds of thousands of Americans’ information is incidentally collected during that process and then accessed each year without a warrant — down from millions of such queries the US government ran in past years. Critics refer to these queries as “backdoor” searches…

“Senate passes, Biden signs surveillance bill despite contentious debate over privacy concerns”, Ted Barrett, Morgan Rimmer and Clare Foran, CNN, April 20th, 2024.

So, what’s feeding generative artificial intelligences? Why, you are, of course, with everyone’s social network ‘allowing’ you to do so.

The TikTok ban will likely be fought in court for years, anyway, and who knows what direction it will take depending on who wins the election?

But social networks and companies will still be hoovering that data up, training artificial intelligences all about you. It will help train algorithms to sell you stuff and influence you to make decisions.

TikTok ain’t the issue.

Why Social Media Moderation Fails

A clear parody image of a Ukrainian tractor towing the Moskva.

Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it’s quite possible that the Facebook algorithms that have run amok with me over similar things – clear parody – may further restrict my account. I clearly marked the image as a parody.

Let’s see what happens. I imagine they’ll just toss more restrictions on me, which is why Facebook and I aren’t as close as we once were. Anyone who thinks a tractor pulling the sunken Moskva really happened should probably have their head examined, but this is the issue with such algorithms left unchecked. The image is quite simply impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, some even having been blown up while their owners were simply trying to reap their harvests.

But this is not about that.

This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.

What The Hell Do You Know?

Honestly, not that much. As a user, I’ve steered clear of most problems with social networks simply by knowing it’s not a private place where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.

What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since I couldn’t work on it by myself because of the infrastructural constraints of Trinidad and Tobago (which still defy reliable emergency communications), I started working on the other aspects of it, and the core issue was ‘trusted sources’.

Trusted Sources.

To understand this problem: you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But what if your mechanic specializes in your car, and your friend has a BMW that spends more time in the shop than on the road? For the BMW, your friend might be the trusted source.

You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.

In a disaster situation, the people you have on the ground are the people you have on the ground. You might be lucky enough to have doctors, nurses, EMTs, and people with some experience in dealing with a disaster of whatever variety is on the table, and so you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.

So the Alert Retrieval Cache I was working on was, after its instantiation, going to have to deal with these very human issues, and the best way to deal with that is with other humans. We’re kind of good at that, and it’s not something that AI is very good at, because AI is built by specialists, and beyond job skills, most people are generalists. You don’t have to be a plumber to fix a toilet, and you don’t have to be a doctor to put a bandage on someone. What’s more, people can grow beyond their pasts, despite an infatuation in Human Resources with the past.

Nobody hires you to do what you did, they hire you to do what they want to do in the future.

So even just in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters – one open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot, in my opinion) – trusted sources are a complete crapshoot. This leads everyone to trust nothing, or some to trust everything.

Generally, if it goes with your cognitive bias, you run with it. We’re all guilty of it to some degree. The phrase “trust but verify” is important.
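To make “trust but verify” concrete, here’s a hypothetical sketch, in the spirit of what an Alert Retrieval Cache-like system might track: trust kept per source and per topic, moving as reports are verified on the ground. The class and numbers are my illustration, not the actual design.

```python
from collections import defaultdict

class TrustLedger:
    """Track how often a source's reports on a given topic check out."""

    def __init__(self):
        # (source, topic) -> [verified_count, total_count]
        self.records = defaultdict(lambda: [0, 0])

    def report(self, source: str, topic: str, verified: bool) -> None:
        """Log a report from a source once someone on the ground checks it."""
        rec = self.records[(source, topic)]
        rec[1] += 1
        if verified:
            rec[0] += 1

    def trust(self, source: str, topic: str) -> float:
        """Fraction of this source's reports on this topic that were verified.
        Unknown sources start at 0.5: neither trusted nor distrusted."""
        verified, total = self.records[(source, topic)]
        return verified / total if total else 0.5

ledger = TrustLedger()
ledger.report("maria", "flooding", verified=True)
ledger.report("maria", "flooding", verified=True)
ledger.report("maria", "road_access", verified=False)

print(ledger.trust("maria", "flooding"))     # 1.0 -- reliable on flooding
print(ledger.trust("maria", "road_access"))  # 0.0 -- not on road access
print(ledger.trust("sam", "flooding"))       # 0.5 -- no track record yet
```

The point of the sketch is that trust is domain-specific and earned over time – your mechanic for your car, your friend for the BMW – which is exactly what a one-size-fits-all fact check throws away.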

In social media networks, ‘fact checking’ became the greatest thing since giving up one’s citizenship before a public offering. So fact checking happens, and for the most part it is good – but when applied to parody, it fails. Why? Because algorithms don’t have a sense of humor. It’s either a fact, or it’s not. And so when I posted the pictures of Ukrainian tractors towing everything, Facebook had a hissy fit, restricted my account, and apparently had a field day going through past things I posted that were also parody. It’s stupid, but that’s their platform and they don’t have to defend themselves to me.

Is it annoying? You bet. Particularly since no one knows how their algorithms work – I sincerely doubt that even they do. But this is a part of how they moderate content.

In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting oneself in the foot (as I may be doing now when this posts to Facebook), but if you’ve already lost your feet, how much does that matter?

Social media sites fail when they don’t explain their policies. But it gets worse.

Piling on Users.

One thing I’ve seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the ‘pile on’, where a group of people get onto a thread and overload someone’s ability to respond to one of their posts. On most networks there is some ‘slow down’ mechanism to avoid that happening, and I imagine Twitter is no different, but that might apply just to one specific account. Get enough accounts doing the same thing to the same person and it can get overwhelming on the technical side, and if it’s coordinated – maybe everyone has the same sort of avatar, as an example – well, that’s a problem, because it’s basically a Distributed Denial of Service on another user.

Now, this could be about all manner of stuff, but the algorithms involved don’t care how passionately people might feel about a subject. They could easily see commonalities in the ‘attack’ on a user’s post, and even on the user. A group could easily be identified as doing pile ons, and their complaints could be ‘demoted’ on the platform, essentially making it an eyeroll and an, “Ahh. These people again.”
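As an illustration of how simple that kind of grouping could be, here’s a hypothetical sketch – not any platform’s actual algorithm, and the thresholds are invented for illustration – that flags a burst of replies from many distinct accounts aimed at a single target:

```python
from collections import defaultdict

WINDOW_SECONDS = 600   # look at ten-minute windows
REPLY_THRESHOLD = 50   # invented tuning values, purely illustrative
ACCOUNT_THRESHOLD = 30

def find_pile_ons(replies):
    """replies: list of (timestamp, sender, target) tuples from a feed."""
    buckets = defaultdict(list)  # (target, window index) -> senders
    for ts, sender, target in replies:
        buckets[(target, ts // WINDOW_SECONDS)].append(sender)

    flagged = []
    for (target, window), senders in buckets.items():
        # Many replies from many distinct accounts, in one window, at one
        # target: that looks less like conversation and more like a
        # distributed denial of service on a person.
        if len(senders) >= REPLY_THRESHOLD and len(set(senders)) >= ACCOUNT_THRESHOLD:
            flagged.append((target, window, len(set(senders))))
    return flagged

# Toy data: 60 distinct accounts reply to @victim within the same ten minutes.
feed = [(1000 + i, f"account_{i}", "@victim") for i in range(60)]
print(find_pile_ons(feed))  # [('@victim', 1, 60)]
```

Notice that nothing in it reads the content of the replies; passion and coordination look identical from that altitude.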

It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn’t, I would say it’s unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching for that. Are there? I have no earthly idea, but it’s something I could see easily implemented.

And if you’re someone who does it, and this happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.

Not All In The Group Are Saints.

If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality and there’s no apology to be had because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold that well.

Members of groups can break terms of service. Now, we assume that each account is looked at individually. What happens if they can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it’s always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.

Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.

We don’t know.

We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.

What should you not do on a network? Break the Terms of Service.

But when the terms of service are ambiguous, how much do they really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.

Yet, if somehow the group gets known to the platform, and enough members break Terms of Service, could they? Would they? Should they?

We don’t know. And people could be shooting themselves in the foot.

It’s Not Our Platform.

As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.

People like to talk about freedom of speech a lot, but that’s not really legitimate when you’re on someone else’s website. The platforms can get as close to it as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don’t have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flagship needed to be fact-checked.

In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.

So when dealing with these social networks, bear that in mind.