Wikipedia and Its Trouble with LLMs.

Wikipedia, a wonderful resource despite all the drama that comes with the accumulation of content, is having some trouble dealing with the large language model (LLM) AIs out there. There are two core problems: the input and the output.

“…The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because it can open the Wikimedia Foundation up to libel suits and copyright violations—both of which the nonprofit gets protections from but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…” 

“AI Is Tearing Wikipedia Apart”, Claire Woodcock, Vice.com, May 2nd, 2023.

The Input into Wikipedia.

Inheriting the legal troubles of companies that built AI models by taking shortcuts seems like a pretty stupid thing to do, but there are companies and individuals doing it. Fortunately, the Wikimedia Foundation is a bit more responsible, and is more sensitive to biases.

Using an LLM to generate content for Wikipedia is simply a bad idea. There are some tools out there (I wrote about Perplexity.ai recently) that do the legwork for citations, but with Wikipedia, not all citations are necessarily on the Internet. Some are in books, those dusty tomes where we have passed down knowledge over the centuries, and so it takes humans not just to find those citations, but to assess them and ensure that citations of other perspectives are included¹.

As they mention in the article, first drafts are not a bad idea, but they’re also not a great idea. If you’re not invested enough in a topic to do the actual reading, should you really be editing a community encyclopedia? I don’t think so. Research is an important part of any accumulation of knowledge, and LLMs aren’t even good shortcuts, probably because the companies behind them took shortcuts.

The Output of Wikipedia.

I’m a little shocked that Wikipedia might not have been scraped by the companies that own LLMs, considering just how much they scraped and from whom. Wikipedia, to me, would have been one of the first things to scrape to build a learning model, as would have been Project Gutenberg. Now that they’ve had the leash yanked, maybe they’re asking for permission, but it seems peculiar that they would not have scraped that content in the first place.

Yet, unlike companies that simply cash in on the work of volunteers, like Huffington Post, Stack Overflow, and so on, Wikimedia has a higher calling – and cashing in on volunteer works would likely mean fewer volunteers. Volunteers contribute for their own reasons, but in an organization they collectively work toward something. The Creative Commons license Wikipedia uses requires attribution, and LLMs don’t attribute anything. I can’t even get ChatGPT to tell me how many books it’s ‘read’.

What makes this simple is that if all the volunteer work from Wikipedia is shoved into the intake manifold of an LLM, and that LLM is subscription-based so that volunteers would have to pay to use it, it’s a non-starter.

We All Like The Idea of an AI.

Generally speaking, the idea of an AI being useful for so many things is seductive, from Star Trek to Star Wars. I wouldn’t mind an Astromech droid, but where science fiction meets reality, we are stuck with the informational economy and infrastructure we have inherited over the centuries. Certainly, it needs to be adapted, but there are practical things that need to be considered outside of the bubbles that a few billionaires seem to live in.

Taking the works of volunteers and works from the public domain² to turn around and sell them sounds Disney in nature, yet Mickey Mouse’s fingerprints on the Copyright Act have helped push back legally on the claims of copyright. Somewhere, there is a very confused mouse.

  1. Honestly, I’d love a job like that, buried in books. ↩︎
  2. Disney started off by taking public domain works and copyrighting their renditions of them, which was fine, but then they made sure no one else could do it – thus the ‘fingerprints’. ↩︎

Smart Watch? Nope.

I don’t do tech reviews, normally, because I don’t think anyone can review any technological device as soon as it comes out. Sure, you can see how fast it is with benchmarks, you can oooh and ahhh over screen resolutions and all the pretty colors, but really, you don’t know how good a device is until you’ve had it a while.

My new technology fetish went away a few decades ago. To paraphrase Douglas Adams, I don’t want technology, I want stuff that works.

A few years ago, I got one of these ‘smart watches’. I didn’t really buy it, but I did, because it came as a special with the phone I purchased at the time. In fact, I wouldn’t have gotten one otherwise, because – well, what’s the point of having something you don’t need? For some people there is a point to that, a point I do not understand or need to understand, but I’m probably lazier than them. The things I need demand enough time of me, and I need time to not be doing things for the things that I need.

Henry David Thoreau was on to something.

This watch was shiny and new. It was packaged prettily. It even came charged, and so I dutifully became familiar with it and got it to do some stuff – like tell the time, monitor my heart rate and sleeping, and connect to my calendar. It did these things dutifully, but it would require charging just about every day.

That’s annoying. My first watch was a Mickey Mouse watch, given to me at age 6 by my parents so that (1) I could learn to tell time by the hands, and (2) I would know what time it was and stop asking them. It was incredible for about a week. I would have to wind it up daily, and Mickey dutifully pointed at the hour with his small hand and the minutes with his longer hand. It was then I realized that Mickey Mouse had arms that were not uniform. This bothered me, so I took off the watch and simply looked at the clock on the wall.

They’re selling smart watches now that show the time digitally or traditionally, and they’re all very sleek, but… I don’t think they’re worth the effort. I have lots of devices that tell me what time it is. The heart rate and health stuff was interesting to monitor for a while, but that’s gotten monotonous. And when I look at the watch, greedily sucking at the nipple of its wireless charger, I wonder what the point of it is.

It has not improved my life. The feature for talking to people on it like it’s a phone – the whole Dick Tracy thing – is annoying, and having tried it, anyone who does it in public looks like an idiot. I’m sorry if that’s you, but yes, shouting at your wrist and sticking your ear next to it to hear what’s being said makes you look like an idiot. Notice, I didn’t call you an idiot. It just makes you look like one.

Anywhere my watch went, my phone went. Much more usable. Much longer battery life. Much more useful. The whole ‘smart watch’ thing seems like a novelty to me.

I sort of knew it when I got it because I didn’t really want it. Now it will go into a drawer of junk, leaving my wrist free and unencumbered when I write on a laptop, without it scratching the laptop case. Yes, my laptop has a scar.

Wearable technology is a cool idea until you wear it a while. Now they’re gonna put ‘AI’ on them to make them ‘smarter’, and again – not that big of a deal.

Of course, if you really want one, go out and get one, but really – what do you need it to do?

When The Internet Eats Itself

The recent news of Stack Overflow selling its content to OpenAI was something I expected. It was a matter of time. Users of Stack Overflow were surprised, which I am surprised by, and upset, which I’m not surprised by.

That seems to me a reasonable response. Who wouldn’t? Yet when we contribute to websites for free on the Internet and it’s not our website, it’s always a terrible bargain. You give of yourself for whatever reason – fame, prestige, or just sincerely enjoying helping, and it gets traded into cash by someone else.

But companies don’t want you to get wise. They want you to give them your content for free so that they can tie a bow around it and sell it. You might get a nice “Thank you!” email, or little awards of no value.

No Good Code Goes Unpunished.

The fallout has been disappointing. People have tried logging in and sabotaging their top answers. I spoke to one guy on Mastodon a few days ago, and he got banned for it. It seems pretty obvious to me that Stack Overflow had already backed up the database where all the stuff was, and that they would be keeping an eye on things. Software developers should know that. There was also some confusion about the Creative Commons licensing the site uses versus the rights users grant the owners of the website – which are separate things, not mutually exclusive ones.

Is it slimy? You bet. It’s not new, and the companies training generative AI have been pretty slimy. The problem isn’t generative AI, it’s the way the companies decide to do business by eroding trust with the very market for their product while poisoning wells that they can no longer drink from. If you’re contributing answers for free that will be used to train AI to give the same answers for a subscription, you’re a silly person¹.

These days, generative AI companies need to put filters on the front of their learning models to keep small children from getting sucked in.

Remember Huffington Post?

Huffington Post had this neat little algorithm for swapping around headlines until it found one that people liked, it gamed SEO, and it built itself into a powerhouse that almost no one remembers now. It was social, it was quirky, and it was fun. Volunteers put up lots of great content.

When Huffington Post sold for $315 million, the volunteers who provided the content for free and built the site up before it sold sued – and got nothing. Why? Because they had volunteered their work.

I knew a professional journalist who was building up her portfolio and added some real value – I met her at a conference in Chicago probably a few months before the sale, and I asked her why she was contributing to HuffPost for free. She said it was a good outlet to get some things out – and she was right. When it sold, she was angry. She felt betrayed, and rightfully so, I think.

It seems people weren’t paying attention to that. I did².

You live, you learn, and you don’t do it again. With firsthand and secondhand experience, if I write on a website and I don’t get paid, it’s my own website. Don’t trust anyone who says, “Contribute and good things will happen!” Yeah, they might, but it’s unlikely they will happen for you.

If your content is good enough for a popular site, it’s good enough to get paid to be there. You in the LinkedIn section – pay attention.

Back To AI’s Intake Manifold.

I’ve written about companies with generative AI models scraping around looking for content, with works contributed to websites becoming part of the training models. It’s their oil; it’s what keeps them burning through cash as they try to… replace the people whose content they use. In return, the Internet gets slop generated all over, and you’ll know the slop when you read it – it lacks soul and human connection, though it fakes it from time to time, like the pornographic videos that make the inexperienced think that’s what sex is really like. Nope.

The question we should be asking is whether it’s worth putting anything on the Internet at this point, just to have it folded into a statistical algorithm that chews up our work and spits out something like it. Sure, there are copyright lawsuits happening. But the argument for transformative works doesn’t hold up well in a sane mind given the exponentially larger amount of content used to create a generative AI at this point.

So what happens when fewer people contribute their own work? One thing is certain: the social aspect of the Internet will not thrive as it has.

Social.

The Stack Overflow website was mainly an annoyance for me over the years, but I understand that many people had a thriving society of a sort there. It was largely a meritocracy, like open source, at least at its core. You’ll note that I’m writing of it in the past tense – I don’t think anyone with any bit of self-worth will contribute there anymore.

The annoyance aspect for me came from (1) not finding solutions to the quirky problems that people hired me to solve³, and (2) finding code fragments I tracked down to Stack Overflow that were poorly (if at all) adapted to the employer’s or client’s needs. I had also learned not to give away valuable things for free, so I didn’t get involved. Most, if not all, of the work I did required my silence on how things worked, and if you get on a site like Stack Overflow, your keyboard might just get you in trouble. Yet the problem wasn’t the site itself, but those who borrowed code like it was a cup of sugar instead of a recipe.

Beyond us software engineers, developers, or whatever we call ourselves these days, there are a lot of websites with social interaction that are likely getting their content shoved into an AI learning model at some point. LinkedIn, owned by Microsoft and annoyingly in the top search results, is ripe for being used that way.

LinkedIn doesn’t pay for content, yet if you manage to get popular, you can make money off of sponsored posts: “Hey, say something nice about our company, here’s $x.” That’s not really social, but it’s how ‘influencers’ make money these days. When you get paid to write posts in that way, you might be selling your soul unless you keep a good moral compass, but when bills need to get paid, that moral compass sometimes goes out the window. I won’t say everyone is like that; I will say it’s a danger, and it’s why I don’t care much about ‘influencers’.

In my mind, anyone who is an influencer is trying to sell me something, or has an ego so large that Zaphod Beeblebrox would be insanely jealous.

Regardless, to get popular, you have to contribute content. Who owns LinkedIn? Microsoft. Who is Microsoft partnered with? OpenAI. The dots are there. Maybe they’re not connected. Maybe they are.

Other websites are out there that are building on user content. The odds are good that they have more money for lawyers than you do, that their content licensing and user agreement work for them and not you, and if someone wants to buy that content for any reason… you’ll find out what users on Stack Overflow found out.

All relationships are built on trust. All networks are built on trust. The Internet is built on trust.

The Internet is eating itself.

  1. I am being kind. ↩︎
  2. I volunteered some stuff to WorldChanging.com way back when with the understanding it would be Creative Commons licensed. I went back and forth with Alex and Jamais, as did a few other contributors, and because of that and some nastiness related to the Alert Retrieval Cache, I walked away from the site – only to find out later, from an editor who contacted me about their book, that they wanted to use some of my work. Nope. I don’t trust futurists, and maybe you shouldn’t either. ↩︎
  3. I always seemed to be the software engineer who could make sense out of gobbledygook code, rein it in, take it to water, and convince it to drink. ↩︎

Study Claims Human Writers and Artists Pollute More Than AI.

The second I came across the study, “The carbon emissions of writing and illustrating are lower for AI than for humans”, I knew that there had to be flaws in it.

The premise of the study seemed weird from the start: What would be the point of it? Why did someone think to compare the carbon footprints of humans and AI for generating images and text? What burning question were they trying to answer?

Is the argument to be that there should be fewer humans? The way things are going on the planet, that almost seems plausible – people warring and killing people could say, “We’re reducing the carbon footprint of humanity!”, get some carbon credits for it, and feel good about their contributions – except, if protests around the world are any indicator, that may not sell well.

The answer is likely that since people have been pointing out that the carbon footprint of generative AI is high, they want to be able to have a rebuttal. But there are some questions.

To calculate the carbon footprint of a person writing, we consider the per capita emissions of individuals in different countries. For instance, the emission footprint of a US resident is approximately 15 metric tons CO2e per year [22], which translates to roughly 1.7 kg CO2e per hour. Assuming that a person’s emissions while writing are consistent with their overall annual impact, we estimate that the carbon footprint for a US resident producing a page of text (250 words) is approximately 1400 g CO2e. In contrast, a resident of India has an annual impact of 1.9 metric tons [22], equating to around 180 g CO2e per page. In this analysis, we use the US and India as examples of countries with the highest and lowest per capita impact among large countries (over 300 M population).

“The carbon emissions of writing and illustrating are lower for AI than for humans”, Bill Tomlinson, Rebecca W. Black, Donald J. Patterson & Andrew W. Torrance, Scientific Reports, 14 Feb 2024
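
That arithmetic is easy to check. Here’s a minimal Python sketch of it; note that the roughly 300-words-per-hour writing speed isn’t stated in the excerpt – it’s an assumption inferred from the quoted numbers:

```python
# Back-of-envelope reproduction of the study's arithmetic.
# ASSUMPTION: ~300 words/hour writing speed, inferred from the quoted
# numbers (1.7 kg/h -> ~1400 g per 250-word page); the excerpt doesn't state it.

HOURS_PER_YEAR = 24 * 365   # ~8760
WORDS_PER_PAGE = 250
WORDS_PER_HOUR = 300        # inferred assumption

def grams_co2e_per_page(annual_tons_co2e: float) -> float:
    """Convert a per-capita annual footprint into g CO2e per written page."""
    grams_per_hour = annual_tons_co2e * 1_000_000 / HOURS_PER_YEAR
    hours_per_page = WORDS_PER_PAGE / WORDS_PER_HOUR
    return grams_per_hour * hours_per_page

print(f"US resident:    {grams_co2e_per_page(15):.0f} g CO2e/page")   # ~1427, the study's ~1400
print(f"India resident: {grams_co2e_per_page(1.9):.0f} g CO2e/page")  # ~181, the study's ~180
```

The method only works if you charge a slice of a person’s entire annual footprint against the hours spent typing – which is exactly the problem.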

What they don’t take into account – to the detriment of us lowly human writers – is that the physical act of writing so many words an hour is not all of writing. In fact, all of writing – real writing – requires a lifetime of sensory input, as well as all the thought up to that point. Words don’t just fall out of humans.

This point is important because it’s also true of generative AI. Generative AI is certainly trained on large datasets, but those datasets have come from… where? They therefore inherit the human writers’ carbon footprint, which would be higher still, since the companies used (read: stole) materials that humans created to feed the training model. Further, every human involved in that process, as well as the maintenance of the system, adds to the carbon footprint. Then there are the materials in the GPUs, the integration, etc.

NVIDIA even has a page on the materials that go into GPUs.

So sure, maybe in generating a few thousand words – we presently call that ‘slop’ – to do someone’s homework or help write a monotonous study (they did use ChatGPT-3), the carbon footprint might seem lower, but I’d say it’s actually higher than the average human’s.

Because we humans, in having our average carbon footprint, do other things that raise it: we drive to work, we use electricity to power devices pitched to us to increase our productivity, we cook meals, etc. All of that – all of that – is being charged against our writing as if it has no value of its own.

Before generative AI came around, nobody pointed at writers and said, “Those people just have this carbon footprint and they don’t do anything. We should create a generative AI that does it.” In fact, nobody actually asked for any of that. And then the work written by writers gets sucked into a learning model and used to generate text that creates more slop – of questionable quality and dubious value, spamming the Internet with something that has – and I apologize to real Spam – less nutritional value and taste.

AI art is much the same, I imagine, but I can’t really draw to save my life and have had the good fortune not to have to. I wrote something about using AI art in blogs that explains my usage, but I would never tell my visual art friends that AI has a lower carbon footprint.

The whole study seems funded by some company that wants a rebuttal on carbon footprints. It is, at best, very limited in how it views the carbon footprints of both us lowly humans and our esteemed ‘colleagues’, generative AI. At worst, it’s meant to prop up marketing propaganda for AI against the people who point out that, on top of the human carbon footprint, generative AI adds significantly more.

Unless, of course, this is a study to demonstrate that we need fewer people and we should do something about it – which some governments are doing right now, unfortunately.

Learning More About Sperm Whales: A ‘Phonetic Alphabet’

A whale with an overlay of a representation of audio

In a time when we’re being inundated with all manner of ‘AI’ to ‘make us more productive’ (not the same as ‘work less’), it’s pretty nice to see AI being used to unravel the mysteries of the world around us.

Machine learning, a branch of AI, has been used in the discovery of what could be described as a ‘phonetic alphabet’: the building blocks of a more complex form of communication.

Scientists have been trying for decades to understand how sperm whales communicate. The researchers, part of the Project CETI (Cetacean Translation Initiative) machine learning team, created a giant underwater recording studio with microphones at different depths to examine calls made by about 60 whales, which were tagged to ascertain if they were diving, sleeping or breathing at the surface while clicking.
Having analysed more than 8,700 snippets of sperm whale clicks, known as codas, the researchers claim to have found four basic components making up a “phonetic alphabet”.

“Scientists discover sperm whale ‘phonetic alphabet’”, Al Jazeera, 8 May 2024.

The actual source paper for the article comes from Nature Communications: “Contextual and combinatorial structure in sperm whale vocalisations” (7 May 2024), which even links to the source data on GitHub.
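
To give a sense of the kind of analysis involved – and this is a toy sketch with invented data, not Project CETI’s actual pipeline – here’s how one might turn a coda’s click timestamps into simple rhythm and tempo features and cluster them:

```python
# Toy illustration: derive rhythm/tempo features from coda click times
# and cluster them. The codas below are made up for the example;
# this is NOT the CETI team's actual method.
import numpy as np
from sklearn.cluster import KMeans

codas = [  # click times in seconds within each coda (invented)
    [0.0, 0.2, 0.4, 0.6, 0.8],
    [0.0, 0.15, 0.3, 0.45, 0.6],
    [0.0, 0.3, 0.5, 0.6, 0.7],
    [0.0, 0.25, 0.45, 0.55, 0.65],
]

def features(clicks):
    icis = np.diff(clicks)              # inter-click intervals
    tempo = clicks[-1] - clicks[0]      # total coda duration
    rhythm = icis / icis.sum()          # normalized interval pattern
    return np.concatenate(([tempo], rhythm))

X = np.array([features(np.array(c)) for c in codas])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)  # which rhythm/tempo group each coda falls into
```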

In an election year, it does seem much more attractive to listen to whales than politicians.

The Project CETI website is worth perusing, and there are ways to get involved, from donating funds to contributing code and other things.

In the middle of all the AI hype, it’s good to see something that’s about exploring and increasing our understanding of the world we’re in, and the creatures around us that could probably teach us a few tricks if we knew how to listen.

Further reading on the general topic: How to Use AI to Talk to Whales—and Save Life on Earth

TikTok: China Struck Back.

ByteDance, the owner of TikTok, is, of course, suing the U.S. government over the potential ban of TikTok – but there was something at the very bottom of an Al Jazeera article.

For its part, China has taken similar actions against US-based companies like Meta, whose WhatsApp and Threads platforms were recently ordered to be removed from Chinese-based app stores over questions of national security.

“TikTok owner ByteDance files lawsuit against US law forcing app’s sale”, Al Jazeera, 7 May 2024¹

That’s a pretty important point. Without ties to ByteDance, there would be no reason to ratchet things up – and the ratchet is largely symbolic given the Great Firewall of China.

The First Amendment issue the U.S. government is being taken to task for – and with fair reason – concerns TikTok’s own First Amendment rights. The First Amendment rights of U.S. users serve to cloud the issue, but these things are separate. ByteDance is defending its right, not the rights of users.

That China doesn’t have an equivalent of the First Amendment is conspicuously absent from the media coverage of this, as people are concerned about their First Amendment rights… even as they themselves don’t understand the algorithms, or the cost that they incur on a centralized platform they have no control over – like any platform these days, aside from the Fediverse.

Better informed users might make better informed decisions. The Fediverse awaits.

  1. Al Jazeera just gave an example of being even-handed by adding that detail, on the heels of itself being banned in Israel. ↩︎

AI: Technology, Skepticism, and Your Underwear.

Here are two images depicting futuristic underwear powered by AI technology. The designs are sleek and modern, featuring smart fibers and sensors, with a minimalist setting to emphasize the advanced technology.

There’s a balance between technology and humanity that at least some of us think is out of desirable limits now. In fact, it’s been out of limits for some time, and to illustrate the fact I had DALL-E generate some images of AI powered underwear – because if technology were resting close to one’s nether regions, it might be something one would be more careful about – from privacy to quality control.

“…Those social media companies, again, offered convenience and a — as we know — too good to be true promise of free and open access. We closed our blogs, got in line and ceded control of our social graphs. Drawbridges were rolled up, ads increased and nobody left — at least not in droves. Everyone is there so everyone stays.

Journalists and newspapers were drawn in, promised an audience and were gifted with capricious intermediaries that destroyed the profession and industry.

We lost our handle on what is and was true, stopped having conversations and started yelling at their representations. It became easier to shout at someone online than it was to have a healthier discourse…”

“The tech industry doesn’t deserve optimism, it has earned skepticism”, Cory Dransfeldt, CoryD.Dev, May 6th, 2024

Cory writes quite poignantly in that article of the promises made by technology in the past. In that excerpt, he also alludes to what I call the ‘Red Dots’ that keep us distracted, increasingly feral, and rob us of what is really important to us – or should be.

This melds well with the point in a Scientific American opinion I read today, particularly the aspect of AI and social media:

…A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to the most insecure version of ourselves and push our buttons to achieve their own desired ends…

“AI Doesn’t Threaten Humanity. Its Owners Do”, Joseph Jones, Scientific American, May 6th, 2024.

Again, these are things I have written about regarding AI, which connects to social media, which connects to you, gentle reader – your habits, your information, your privacy. In essence, your life.

I’d say we’re on the cusp of something, but it’s the same cusp. We can and should be skeptical of these companies trying to sell us on a future that they promise us but have never really given us.

There’s a flood of AI marketing, silly AI products, and sillier ideas about AI that confound me, like AI chatbots in the publishing industry so you can chat with a bot about a book you read.

There’s silly, and there’s worrisome, like teens becoming addicted to AI chatbots – likely because there’s less risk than dealing with other human beings, the possible feelings of rejection, the anxiety associated with it, and the very human aspect of…

Being human.

I have advocated and will continue to advocate for sensible use of technology, but I’m not seeing as much of it mixed in with the marketing campaigns that suck the air out of the room, filling it with a synthetic ether that has the potential to drown who we are rather than help us maximize who we can be.

We should be treating new technology like underwear we will put next to our nether regions. Carefully. Be picky. And if it’s too good to be true – it likely is. And if the person selling it to you is known for shortcuts, expect uncomfortable drafts around your nether regions.

I’d grown tired of writing about it, and I thank those whose articles I mentioned for giving me that second wind.

Some Things Are Not Technology Issues.

Some years ago, I served on a Board for a residential community – something I haven’t put on my CV and don’t intend to – and everything was falling apart, largely because the lessor wasn’t doing their fair share, which is another story altogether.

While I was on the Board, I took an interest in the office because information – which we didn’t have much of, because of the lessor – needed to be stored. The phone the property manager used belonged to the old Chairman, the administrator (when we had one) didn’t have a phone, and no information of use was stored in the office – yet it was central to communicating with residents and shareholders. They were using Outlook, and subscribing to a service that didn’t allow them to email beyond a quota, which is just… well, Microsoft being Microsoft.

So I created a Google Group for the Board, and wrestled people onto it after volunteering to do it enough times that I just got sick of it. Residents weren’t getting emails, and it was obvious to even the dullest nail in the box that the problem was the Microsoft quota. The general response, it seems, was to just pay more for Outlook, but I suggested using a Google Group because that way we could split communications between residents and shareholders as needed, let people access old conversations easily and refer back to things, and build a knowledge base on top of it all. It was not rocket science. It was very late 1990s technology I was talking about: send one email to the group, and Google delivers it to everyone. Presto magico.

Being a volunteer Board, you never know who you’re going to get on it. I pressed on those things and then Covid-19 happened, and so nothing really happened. We did manage to get the administrator a phone and get the property manager his own phone, and frustrated with the way things were going I left the Board.

After leaving, I had an open invitation to assist the Boards that came after with everything, but stayed out of their way.

That was 2020 or so. It’s 2024 now. People are still sometimes not getting email because of the same issue, something I told every single Board about for the last 4 years.

It takes only a few minutes to set up a Google Group. There’s nothing complicated about it. I walked by the office as a local expert was explaining why emails were getting bounced back.

The response I overheard: they needed to pay more for Outlook.

Sometimes, you can lead a horse to water and can’t make it drink – but there are times when you lead a rat to water and wish to drown it.

This is why I hate dealing with local companies in Trinidad and Tobago, and don’t offer any services here. It’s somehow stuck in time. I have loads of stories like this.

TikTok, History and Issues.

I’ve been noting a lot of blow-back related to the ‘Coming Soon’ ban of TikTok in the United States, and after writing, ‘Beyond TikTok *Maybe* Being Banned‘, I found myself wondering… why are people so attached to a social network?

I could get into the obvious reasons – the sunk cost fallacy, where so much time and energy is invested in something that one doesn’t want to leave it. We humans tend toward that despite knowing better.

We see this with all social media, where one can’t simply move content from one place to another easily, much less the connections made. If you back up your Facebook account, as an example, it’s only your information that gets backed up – not all the interactions with everyone you know, and not content you may be tagged in. So you lose that; it’s sort of like moving to a different geography – you can’t always keep the relationships tied to the previous one.
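
To make that concrete – and this is a toy sketch with invented structures, not Facebook’s actual export format – consider what survives a typical ‘download your data’ export:

```python
# Toy sketch: a personal data export keeps what you authored,
# not the edges other people created. Invented data, invented format.
posts = [
    {"author": "you",    "text": "Beach day!",    "tagged": ["ana"]},
    {"author": "ana",    "text": "Great trip",    "tagged": ["you"]},   # you're tagged
    {"author": "miguel", "text": "Re: your post", "reply_to": "you"},   # an interaction
]

def export_account(user, all_posts):
    """Mimics a takeout: only content the user authored survives."""
    return [p for p in all_posts if p["author"] == user]

backup = export_account("you", posts)
print(len(backup), "of", len(posts), "posts survive the export")
# 1 of 3 -- the tags and interactions by others don't come with you
```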

Yet the vehemence of some of the posts in defense of TikTok had me digging deeper. It wasn’t just the sunk cost; there was more to it than that. I haven’t used TikTok, not because of some grand reason – I just didn’t find it appealing.

Thus I explored some things. I’m not really for or against TikTok. I am against social media that passes your information on to entities you may not know, where it will be used in ways that are well beyond your control – even anonymized, it doesn’t mean an individual isn’t identifiable. How many times have you pictured a face and not remembered a name? Still, there’s something really sticky about TikTok.

Here’s what I found.

The Start: The Death of Vine.

When Vine curled up and died as Twitter started allowing video uploads, TikTok stepped in. Vine was used by a diverse group of people for the regular stuff, including marketing and branding. There was nothing too different from the users of other social media at that point.

It’s appealing to those with short attention spans – and the average attention span is now 47 seconds.

Then Ferguson happened, and Vine ended up becoming a part of an identifiable social movement after Michael Brown was shot and killed, largely because of then St. Louis City Alderman Antonio French’s posts documenting racial tensions in and around Ferguson.

It connected people who were participating in protests, which happens to an extent on other social media, but not, it seems, as much. It makes sense: a short video on a mobile phone drains less of the battery, and the context of protests is hard to miss in a short video format – so while documenting things, one is more mobile, can post more information in context, and can be seen by a lot of people. That’s a powerful tool for social commentary and social awareness.

It made such an impact that activists mourned the loss of Vine.

TikTok showed up, with recommendation algorithms.

The Vine Replacement.

TikTok has the regular band of social media users, from dancing to brands – but it filled the vacuum left behind for social awareness and protest. It had other ‘benefits’: being able to use copyrighted music on that platform but not others allowed lip-syncing and dance for a new generation of social media users. You can ‘stitch’ other videos – combining someone else’s video with yours – allowing commentary on commentary, like a threaded conversation, only with combined contexts¹.

There’s a lot of commentary on its algorithm for giving people what they want to see as well.

Certain landmark things happened in the world that highlighted social awareness and activism.

Black Lives Matter

Russian Invasion of Ukraine

Israel-Hamas War.

Plenty of other platforms were used in these, but the younger generations on the planet gravitated to TikTok. It became a platform where they could air their own contexts and promote awareness of things that they care about – though not all minorities may agree – with one large blind spot.

The Blind Spot

That blind spot accounts for 18.6% of the global population: China. There is no criticism of China on TikTok; it’s removed. Maybe because people are caught up in their own contexts, they seem unaware of that, and of the state of human rights in China. It’s a platform where you can protest and air dirty laundry – except in the country it is headquartered in.

It should be at least a little awkward to use a platform for social activism that is headquartered in a country that doesn’t permit it – much less a country that sits 3rd on the list of worst countries for human rights and rule of law as of 2022, below Yemen and Iran.

The Great Firewall of China absolves users of TikTok of blame for their ignorance by assuring that ignorance.

And interestingly, of the 30+ countries that have banned TikTok, China is one of them. The localized version, Douyin, is subject to censorship by the Chinese Communist Party.

I’d say that should make everyone a little leery about supporting TikTok.

But What Will Come Next?

The TikTok platform certainly has allowed the younger generations to give voice to their situations and issues. That is not a bad thing.

There are a few things happening: TikTok won’t go away for a while; it will more than likely be in court for some years appealing the ban. And if people do care about social awareness and activism, it’s a hard case to make that what’s good for the rest of the world isn’t good for China.

If you truly care about human rights, TikTok is a paradox. It’s hard to have a conscientious conversation about human rights on a platform which doesn’t practice those same rights in its own country.

The key to finding an alternative is the algorithm, since the algorithm is fed by tracking users – users who might not be as keen on being tracked once they understand what that means.
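
To make the connection plain – and this is a toy sketch with invented signals and data, not how TikTok actually works – here’s the skeleton of an engagement-driven feed ranker:

```python
# Toy sketch: an engagement-driven feed ranker. The "tracking" is the
# watch history; without it, there is nothing to rank by.
# All signals and data here are invented for illustration.
from collections import defaultdict

# watch history: (user, video_topic, fraction_of_video_watched)
history = [
    ("ana", "dance", 0.9), ("ana", "politics", 0.2),
    ("ana", "dance", 1.0), ("ana", "cooking", 0.5),
]

def interest_profile(user, events):
    """Average watch-completion per topic -- the tracking part."""
    totals, counts = defaultdict(float), defaultdict(int)
    for u, topic, watched in events:
        if u == user:
            totals[topic] += watched
            counts[topic] += 1
    return {t: totals[t] / counts[t] for t in totals}

def rank_feed(profile, candidates):
    """Order candidate videos by predicted interest."""
    return sorted(candidates, key=lambda topic: profile.get(topic, 0), reverse=True)

profile = interest_profile("ana", history)
print(rank_feed(profile, ["politics", "dance", "cooking", "sports"]))
# ['dance', 'cooking', 'politics', 'sports']
```

Strip out the tracking and the ranker has nothing to work with – which is why the algorithm and the surveillance are inseparable.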

Something will come. Something always does.

  1. This has seen some sociological study, as you can see in “Stitching the Social World: The Processing of Multiple Social Structures in Communication”. ↩︎

The Dark Side of the AI.

It didn’t take as long as we expected. Last week, a former school athletic director was arrested for allegedly using AI-generated audio to frame a principal.

This being a campaign year, I thought that most of the AI hijinks would revolve around elections around the world – and those are happening – but I didn’t think we’d see such early adoption of AI in this sort of thing. And by an athletic director, no less – not a title typically known for mastery of technology.

AI has a dark side, which a few of us have been writing about. The Servitor does a good job of documenting what they coined as ‘Dark ChatGPT’ – well worth a look. Any technology can be twisted to our own devices.

It’s not the technology.

It’s us.

Again.

Maybe the CEO of Google was right about a need for more lawyers.