Beyond A Widowed Voice.

By now, the news of Scarlett Johansson’s dispute with OpenAI over a voice that sounds like hers has made the rounds. She’s well known, and regardless of one’s interests, she’s likely to pop up in various contexts. However, she’s not the first.

While different in some ways, voice actors Paul Skye Lehrman and Linnea Sage are suing Lovo for similar reasons. They were hired for what they thought were one-off voice-overs, then heard their voices saying things they had never said. More to the point, they heard their voices doing work they never got paid for.

The way they found out was oddly poetic.

Last summer, as they drove to a doctor’s appointment near their home in Manhattan, Paul Skye Lehrman and Linnea Sage listened to a podcast about the rise of artificial intelligence and the threat it posed to the livelihoods of writers, actors and other entertainment professionals.

The topic was particularly important to the young married couple. They made their living as voice actors, and A.I. technologies were beginning to generate voices that sounded like the real thing.

But the podcast had an unexpected twist. To underline the threat from A.I., the host conducted a lengthy interview with a talking chatbot named Poe. It sounded just like Mr. Lehrman.

“He was interviewing my voice about the dangers of A.I. and the harms it might have on the entertainment industry,” Mr. Lehrman said. “We pulled the car over and sat there in absolute disbelief, trying to figure out what just happened and what we should do.”

“What Do You Do When A.I. Takes Your Voice?”, Cade Metz, New York Times, May 16th, 2024.

They aren’t sex symbols like Scarlett Johansson. They weren’t the highest paid actresses in 2018 and 2019. They aren’t seen in major films. Their problem is just as real, just as audible, but not quite as visible. Forbes covered the problems voice actors faced in October of 2023.

…Clark, who has voiced more than 100 video game characters and dozens of commercials, said she interpreted the video as a joke, but was concerned her client might see it and think she had participated in it — which could be a violation of her contract, she said.

“Not only can this get us into a lot of trouble if people think we said [these things], but it’s also, frankly, very violating to hear yourself speak when it isn’t really you,” she wrote in an email to ElevenLabs that was reviewed by Forbes. She asked the startup to take down the uploaded audio clip and prevent future cloning of her voice, but the company said it hadn’t determined that the clip was made with its technology. It said it would only take immediate action if the clip was “hate speech or defamatory,” and stated it wasn’t responsible for any violation of copyright. The company never followed up or took any action.

“It sucks that we have no personal ownership of our voices. All we can do is kind of wag our finger at the situation,” Clark told Forbes.

“‘Keep Your Paws Off My Voice’: Voice Actors Worry Generative AI Will Steal Their Livelihoods”, Rashi Shrivastava, Forbes.com, October 9th, 2023.

As you can see, the whole issue is not new. It just became more famous because of a more famous face, and because it involves OpenAI, a company with more questions about its training data than ChatGPT can answer. So the story has been sung from the rooftops.

Meanwhile, some are trying to license the voices of dead actors.

Sony recently warned AI companies about unauthorized use of the content it owns, but when one’s content is necessarily public, how do you protect it?

How much of what you post, from writing to pictures to voices in podcasts and family videos, can you control? Posting costs nothing, yet it can cost individuals their futures. And when it comes to training models, these AI companies are eroding the very trust they need from the people they want to sell their product to – unless they’re just enabling talentless and incapable hacks to take over jobs that talented and capable people already do.

We have more questions than answers, and the trust erodes as more and more people are impacted.

When The Internet Eats Itself

The recent news of Stack Overflow selling its content to OpenAI was something I expected. It was a matter of time. Users of Stack Overflow were surprised, which I am surprised by, and upset, which I’m not surprised by.

That seems to me a reasonable response. Who wouldn’t be upset? Yet when we contribute for free to websites that aren’t ours, it’s always a terrible bargain. You give of yourself for whatever reason – fame, prestige, or just sincerely enjoying helping – and it gets traded into cash by someone else.

But companies don’t want you to get wise. They want you to give them your content for free so that they can tie a bow around it and sell it. You might get a nice “Thank you!” email, or little awards of no value.

No Good Code Goes Unpunished.

The fallout has been disappointing. People have tried logging in and sabotaging their top answers. I spoke to one guy on Mastodon a few days ago, and he got banned for it. It seems pretty obvious to me that Stack Overflow had already backed up the database containing all of that content, and that they would be keeping an eye out for sabotage. Software developers should know that. There was also some confusion about the Creative Commons licensing the site uses versus the rights granted to the owners of the website, which are not the same thing.

Is it slimy? You bet. It’s not new, and the companies training generative AI have been pretty slimy. The problem isn’t generative AI, it’s the way the companies decide to do business by eroding trust with the very market for their product while poisoning wells that they can no longer drink from. If you’re contributing answers for free that will be used to train AI to give the same answers for a subscription, you’re a silly person1.

These days, generative AI companies need to put filters on the front of their learning models to keep small children from getting sucked in.

Remember Huffington Post?

Huffington Post had this neat little algorithm for swapping around headlines until it found one that people liked, it gamed SEO, and it built itself into a powerhouse that almost no one remembers now. It was social, it was quirky, and it was fun. Volunteers put up lots of great content.

When Huffington Post sold for $315 million, the volunteers who had provided the content for free and built the site up before the sale sued – and got nothing. Why? Because they had volunteered their work.

I knew a professional journalist who was building up her portfolio and added some real value – I met her at a conference in Chicago probably a few months before the sale, and I asked her why she was contributing to HuffPost for free. She said it was a good outlet to get some things out – and she was right. When it sold, she was angry. She felt betrayed, and rightfully so I think.

It seems people weren’t paying attention to that. I did2.

You live, you learn, and you don’t do it again. With firsthand and secondhand experience, if I write on a website and don’t get paid, it’s my own website. Don’t trust anyone who says, “Contribute and good things will happen!” Yeah, they might, but it’s unlikely they will happen for you.

If your content is good enough for a popular site, it’s good enough to get paid to be there. You in the LinkedIn section – pay attention.

Back To AI’s Intake Manifold.

I’ve written about companies with generative AI models scraping around looking for content, with works contributed to websites becoming part of the training data. It’s their oil; it’s what keeps them burning through cash as they try to… replace the people whose content they use. In return, the Internet gets slop generated all over, and you’ll know the slop when you read it – it lacks soul and human connection, though it fakes it from time to time, like the pornographic videos that make the inexperienced think that’s what sex is really like. Nope.

The question we should be asking is whether it’s worth putting anything on the Internet at this point, just to have it folded into a statistical algorithm that chews up our work and spits out something like it. Sure, there are copyright lawsuits happening. The argument that these are transformative works doesn’t hold up well in a sane mind, given the sheer volume of content used to create a generative AI at this point.

So what happens when fewer people contribute their own work? One thing is certain: the social aspect of the Internet will not thrive as well.

Social.

The Stack Overflow website was mainly an annoyance for me over the years, but I understand that many people had a thriving society of a sort there. It was largely a meritocracy, like open source, at least at its core. You’ll note that I’m writing of it in the past tense – I don’t think anyone with any bit of self-worth will contribute there anymore.

The annoyance for me came from (1) not finding solutions to the quirky problems that people hired me to solve3, and (2) finding code fragments I tracked down to Stack Overflow poorly (if at all) adapted to the employer’s or client’s needs. I had also learned not to give away valuable things for free, so I didn’t get involved. Most, if not all, of the work I did required my silence on how things worked, and if you get on a site like Stack Overflow, your keyboard might just get you in trouble. Yet the problem wasn’t the site itself, but those who borrowed code like it was a cup of sugar instead of a recipe.

Beyond us software engineers, developers, or whatever we call ourselves these days, there are a lot of websites with social interaction whose content is likely being shoved into an AI learning model at some point. LinkedIn, owned by Microsoft and annoyingly in the top search results, is ripe for being used that way.

LinkedIn doesn’t pay for content, yet if you manage to get popular, you can make money off sponsored posts: “Hey, say something nice about our company, here’s $x.” That’s not really social, but it’s how ‘influencers’ make money these days. When you get paid to write posts that way, you might be selling your soul unless you keep a good moral compass, but when bills need to get paid, that moral compass sometimes goes out the window. I won’t say everyone is like that, but I will say it’s a danger, and it’s why I don’t care much about ‘influencers’.

In my mind, anyone who is an influencer is trying to sell me something, or has an ego so large that Zaphod Beeblebrox would be insanely jealous.

Regardless, to get popular, you have to contribute content. Who owns LinkedIn? Microsoft. Who is Microsoft partnered with? OpenAI. The dots are there. Maybe they’re not connected. Maybe they are.

Other websites are out there that are building on user content. The odds are good that they have more money for lawyers than you do, that their content licensing and user agreement work for them and not you, and if someone wants to buy that content for any reason… you’ll find out what users on Stack Overflow found out.

All relationships are built on trust. All networks are built on trust. The Internet is built on trust.

The Internet is eating itself.

  1. I am being kind. ↩︎
  2. I volunteered some stuff to WorldChanging.com way back when with the understanding it would be Creative Commons licensed. I went back and forth with Alex and Jamais, as did a few other contributors, and because of that and some nastiness related to the Alert Retrieval Cache, I walked away from the site to find out from an editor that contacted me about their book that they wanted to use some of my work. Nope. I don’t trust futurists, and maybe you shouldn’t either. ↩︎
  3. I always seemed to be the software engineer who could make sense out of gobbledygook code, rein it in, take it to water and convince it to drink. ↩︎

AI: Technology, Skepticism, and Your Underwear.

Here are two images depicting futuristic underwear powered by AI technology. The designs are sleek and modern, featuring smart fibers and sensors, with a minimalist setting to emphasize the advanced technology.

There’s a balance between technology and humanity that at least some of us think is out of desirable limits now. In fact, it’s been out of limits for some time, and to illustrate the fact I had DALL-E generate some images of AI powered underwear – because if technology were resting close to one’s nether regions, it might be something one would be more careful about – from privacy to quality control.

“…Those social media companies, again, offered convenience and — as we all know — a too-good-to-be-true promise of free and open access. We closed our blogs, got in line and ceded control of our social graphs. Drawbridges were rolled up, ads increased and nobody left — at least not in droves. Everyone is there so everyone stays.

Journalists and newspapers were drawn in, promised an audience and were gifted with capricious intermediaries that destroyed the profession and industry.

We lost our handle on what is and was true, stopped having conversations and started yelling at their representations. It became easier to shout at someone online than it was to have a healthier discourse…”

“The tech industry doesn’t deserve optimism, it has earned skepticism”, Cory Dransfeldt, CoryD.Dev, May 6th, 2024.

Cory writes quite poignantly in that article of the promises made by technology in the past. In that excerpt, he also alludes to what I call the ‘Red Dots‘ that keep us distracted, increasingly feral, and rob us of what is really important to us – or should be.

This melds well with a point in a Scientific American opinion piece I read today, particularly the aspect of AI and social media:

…A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to the most insecure version of ourselves and push our buttons to achieve their own desired ends…

“AI Doesn’t Threaten Humanity. Its Owners Do”, Joseph Jones, Scientific American, May 6th, 2024.

Again, these are things I have written about regarding AI, which connects to social media, which connects to you, gentle reader – your habits, your information, your privacy. In essence, your life.

I’d say we’re on the cusp of something, but it’s the same cusp. We can and should be skeptical of these companies trying to sell us on a future that they promise us but have never really given us.

There’s a flood of AI marketing, silly AI products, and sillier ideas about AI that confound me, like AI chatbots in the publishing industry so you can chat with a bot about a book you read.

There’s silly, and there’s worrisome, like teens becoming addicted to AI chatbots – likely because there’s less risk than dealing with other human beings, the possible feelings of rejection, the anxiety associated with it, and the very human aspect of…

Being human.

I have advocated and will continue to advocate for sensible use of technology, but I’m not seeing as much of it mixed in with the marketing campaigns that suck the air out of the room, filling it with a synthetic ether that has the potential to drown who we are rather than help us maximize who we can be.

We should be treating new technology like underwear we will put next to our nether regions. Carefully. Be picky. And if it’s too good to be true – it likely is. And if the person selling it to you is known for shortcuts, expect uncomfortable drafts around your nether regions.

I’d grown tired of writing about it, and I thank those whose articles I mentioned here for giving me that second wind.

Copyright, AI, And Doing It Ethically.

It’s no secret that the generative, sequacious artificial intelligences out there have copyright issues. I’ve written about it myself quite a bit.

It’s almost become cliche to mention copyright and AI in the same sentence, with Sam Altman having said that there would be no way to do generative AI without all that material – toward the end of this post, you’ll see that someone proved that wrong.

“Copyright Wars pt. 2: AI vs the Public”, written by Toni Aittoniemi in January of 2023, is a really good read on the problem of large AI companies sucking in content without permission. If an individual did it, the large companies doing it would call it ‘piracy’, but now it’s… not? That’s crazy.

The timing of finding Toni on Mastodon was perfect. Yesterday, I found a story on Wired that demonstrates some of what Toni wrote last year, where he posed a potential way to handle the legal dilemmas surrounding creators’ rights – we call it ‘copyright’ because someone was pretty unimaginative and pushed two words together for only one meaning.

In 2023, OpenAI told the UK parliament that it was “impossible” to train leading AI models without using copyrighted materials. It’s a popular stance in the AI world, where OpenAI and other leading players have used materials slurped up online to train the models powering chatbots and image generators, triggering a wave of lawsuits alleging copyright infringement.

Two announcements Wednesday offer evidence that large language models can in fact be trained without the permissionless use of copyrighted materials.

A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification for a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built in a different way to the AI industry’s contentious norm.

“There’s no fundamental reason why someone couldn’t train an LLM fairly,” says Ed Newton-Rex, CEO of Fairly Trained. He founded the nonprofit in January 2024 after quitting his executive role at image-generation startup Stability AI because he disagreed with its policy of scraping content without permission….

“Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content”, Kate Knibbs, Wired.com, March 20th, 2024.

It struck me yesterday that a lot of us writing and communicating about the copyright issue haven’t addressed how it could be handled. It’s not that we don’t know it could be handled; it’s just that we haven’t addressed it as much as we should. I went to sleep considering that and in the morning found that Toni had done much of the legwork.

What Toni wrote extends on such a system:

…Any training database used to create any commercial AI model should be legally bound to contain an identity that can be linked to a real-world person if so required. This should extend to databases already used to train existing AI’s that do not yet have capabilities to report their sources. This works in two ways to better allow us to integrate AI in our legal frameworks: Firstly, we allow the judicial system to work it’s way with handling the human side of the equation instead of concentrating on mere technological tidbits. Secondly, a requirement of openness will guarantee researches to identify and question the providers of these technologies on issues of equality, fairness or bias in the training data. Creation of new judicial experts at this field will certainly be required from the public sector…

“Copyright Wars pt. 2: AI vs the Public”, Toni Aittoniemi, Gimulnautti, January 13th, 2023.

This is sort of like – and it’s my interpretation – a tokenized citation system built into the model itself. It would expand on what, as an example, Perplexity AI does, by allowing style and ideas to have provenance.
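To make my interpretation concrete, here’s a minimal sketch of what one record in such a provenance-aware training database might look like. The field names and the make_record helper are hypothetical illustrations of the idea, not anything an existing AI company implements.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class ProvenanceRecord:
    """One entry in a hypothetical training database that can answer 'where did this come from?'"""
    creator: str       # a real-world identity (or a pointer to one) behind the work
    source_url: str    # where the work was obtained
    license: str       # the terms it was obtained under, e.g. "CC-BY-4.0"
    content_hash: str  # fingerprint of the exact text that entered the training set

def make_record(creator: str, source_url: str, license: str, text: str) -> ProvenanceRecord:
    """Fingerprint a piece of training text and bind it to its creator and license."""
    return ProvenanceRecord(
        creator=creator,
        source_url=source_url,
        license=license,
        content_hash=sha256(text.encode("utf-8")).hexdigest(),
    )

# Example: a single contributed work, traceable back to a person and a license.
record = make_record(
    creator="Jane Doe",
    source_url="https://example.com/essay",
    license="CC-BY-4.0",
    text="An essay contributed to a public website.",
)
print(record.creator, record.content_hash[:12])
```

A model trained only from records like these could, at least in principle, report its sources when questioned – the kind of openness Toni argues regulators and researchers need.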

This is some great food for thought for the weekend.

Critical Thinking In The Age Of AI.

Critical thinking is the ability to suspend judgement, and to consider evidence, observations and perspectives in order to form a judgement, requiring rational, skeptical and unbiased analysis and evaluation.

It can be difficult, particularly being unbiased, rational and skeptical in a world that seems to require responses from us increasingly quickly.

Joe Árvai, a psychologist who has done research on decision making, recently wrote an article about critical thinking and artificial intelligence.

“…my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12th, 2024.

It’s a good article, well worth the read, and it’s in the vein of what I have been writing recently about ant mills and social media. Web 2.0 was built on commerce which was built on marketing. Good marketing is about persuasion (a product or service is good for the consumer), bad marketing is about manipulation (where a product or service is not good for the consumer). It’s hard to tell the difference between the two.

Inputs and Outputs.

We don’t know exactly how much of Web 2.0 was shoveled into the engines of generative AI learning models, but we do know that chatbots and generative AI are now considered more persuasive than humans. In fact, GPT-4 is presently considered 82% more persuasive than humans, as I mentioned in my first AI roundup.

This should be at least a little disturbing, particularly when there are already sites telling people how to get GPT-4 to create more persuasive content, such as this one – and yet the key difference between persuasion and manipulation is whether it’s good for the consumer of the information or not, a key problem with fake news.

Worse, we have all seen products and services that had brilliant marketing but were not good products or services. If you have a bunch of stuff sitting and collecting dust, you fell victim to marketing, and arguably, manipulation rather than persuasion.

It’s not difficult to see that the marketing of AI itself could be persuasive or manipulative. If you had a tool that could persuade people they need the tool, wouldn’t you use it? Of course you would. Do they need it? Ultimately, that’s up to the consumers, but if they in turn are generating AI content that feeds the learning models – what is known as synthetic data – it creates its own problems.

Critical Thought

Before generative AI became mainstream, we saw issues with people sharing fake news stories because they had catchy headlines and fed a confirmation bias. A bit of critical thought applied could have avoided much of that, but it still remained a problem. Web 2.0 to present has always been about getting eyes on content quickly so advertising impressions increased, and some people were more ethical about that than others.

Most people don’t really understand their own biases, but social media companies implicitly do – we tell them with our every click, our every scroll.

This is compounded by the scientific evidence that attention spans are shrinking. Based on research, the average attention span is now 47 seconds. That’s not a lot of time to do critical thinking before liking or sharing something.

While there’s no direct evidence of more or less critical thought out there, the diminished average attention span is a solid indicator that, on average, people are applying less of it.

“…Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of their communities more than they might value their own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser…”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12th, 2024.

In an age of generative artificial intelligence that is here to stay, it’s paramount that we understand ourselves better as individuals and collectively so that we can make thoughtful decisions.

That 3rd AI: The Scapegoat.

This is an update of “A Tale of Two AIs” because, as it turns out, there’s a third: the scapegoat.

This weekend, I was pointed at “‘The Machine Does It Coldly’: Artificial Intelligence Can Already Kill People” on Mastodon, which inspired a conversation and part of this post.

The trouble with headlines like that is that the amorphous and opaque blob of ‘Artificial Intelligence’ gets blamed for killing people, removing the responsibility from those who (1) created and trained the artificial intelligence and (2) used it.

The use of artificial intelligence in war is nothing new. Ukraine saw the first publicly announced uses of artificial intelligence in war, from dealing with misinformation to controlling drones.

So, referring back to ‘A Tale of Two AIs’, we have the artificial intelligence that is marketed, the artificial intelligence that exists, and now the artificial intelligence as a scapegoat.

It’s not just war either. Janet Vertesi’s article highlights this kind of scapegoating by Amazon, Meta (Facebook inclusive) and Microsoft.

The story of AI distracts us from these familiar unpleasant scenes…

“Don’t Be Fooled: Much ‘AI’ is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.Press, April 4th, 2024.

Artificial intelligences don’t suddenly become conscious and bomb people. We humans do that; we tell them what they’re supposed to do and they do it – the day of general artificial intelligence is not yet here.

We have to be careful to assert our own accountability in the age of artificial intelligence. Our humanity depends on it.

Echo Chambers, Ant Mills and Social Networks.

There’s a video of an ant mill, sometimes called a death spiral, that went viral on social media some time ago. It’s a real thing.

Army ants follow pheromone trails, and if they get separated from the main group, they can end up following each other in circles until they either find a path out or die of exhaustion.

It’s mesmerizing to watch. It’s almost like they’re being held in orbit by something like gravity, but that’s not the case. They’re just lost ants, going in circles.

I’ve seen enough things on the Internet come and go to see a sort of commonality.

It’s actually a pretty good metaphor for echo chambers in social media. Social media isn’t just one singular echo chamber; the echo chambers are based on attributes.

If you like, as an example, dogs, you can get into an echo chamber of dog memes. If you also happen to like cats – it is possible to like both – you can get into an echo chamber of cat memes. These are pretty benign echo chambers, but when you start seeing the same memes over and over, you can be pretty sure that echo chamber is in a spiral. You lose interest. You leave, finding a different ‘pheromone trail’ to follow or just… taking a hard right when everyone is going left.

With centralized social networks such as Facebook or Twitter (now X), algorithms feed these echo chambers that connect people. When those echo chambers become stale, the connections with others within the echo chamber remain, and before you know it you have the equivalent of an ant mill of humans in a social network. To stay in that echo chamber, critical thinking is ignored and confirmation biases are fed.

This also accelerates when the people who provide content to the echo chambers – the pheromones, if you will – leave. Some might follow them elsewhere, but the inertia keeps many there until… well, until they’re exhausted.

This seems a logical conclusion to the algorithmic display of content, or promoting certain posts all the time in search results.

Do you find yourself using the same apps, doing the same things over and over with diminishing returns of happiness (assuming there is happiness in your echo chamber)? Does it seem like you’ve seen that meme before?

You might be in a spiral.

Get out. Don’t die sniffing someone else’s posterior.

Week One of Mastodon.

I’ve been on Mastodon a week now and thought I should write a little bit about the experience.

There’s not much to write about. It works. There are interesting people to follow, I’m confident that my data isn’t being collected, and my feed is always interesting because someone else’s algorithm isn’t controlling what I see.

It also turns out that when I wrote that attempting to use Mastodon was ‘like trying to shag an unwilling octopus’, it had a lot to do with the people who landed there from my older networks and didn’t really explain anything – leaving me wondering about which server to join, whether I needed to build my own server, and so on.

It’s actually quite easy. It doesn’t really matter which server you’re on – I’m on social.mastodon – because they all connect through the Fediverse, which is to say that they are decentralized.

Relative to other social networks.

That last part is so important to me. When I was active on Facebook, I saw a very large decline over the years of quality content that I wanted to see. This was underlined by the latest discovery that Facebook is spamming users.

Twitter, or if you’re a Musk-bro, ‘X’, is much the same thing. What’s hilarious is that both of those social networks are trying to train their generative AIs and have the worst platforms because of AI and algorithms. Web 2.0 meets AI, chaos ensues.

LinkedIn deserves mention here since so many people use it, but… as far as professional networking goes, I don’t think it counts as much as building real connections outside the leering eyes of Microsoft. Being asked to help write articles for them – which I’m sure will be used to train their AI – just so I can have a cool title? Nope, no thanks. Hit me in the wallet.

Pros and Cons.

I have yet to have a negative experience with anyone on Mastodon. In fact, when I respond to someone’s post for the first time, I get prompted to basically be courteous, and I expect other people are prompted as well.

I do miss being able to comment on something I retransmit – in Mastodon speak, that’s boosting. I’m not sure why that is, but I’ve found it’s not something I actually need.

The only thing that Mastodon lacks so far are connections with some family and friends who haven’t moved to Mastodon. That’s simply a factor of inertia, much like in the 1990s many people thought ‘The Internet’ was AOL, which Facebook has mimicked pretty well.

In all, I’m finding Mastodon worthwhile, and much less twitchy than the other social networks, largely because I’m not seeing crap I don’t want to see.

If I have a quiet mind to do other things and a social network is in the background, I consider that a win. Mastodon is a win.

Walled Gardens Become Litterbox Prisons.

Whenever a walled garden on the internet appears, it seems a matter of time before groups of people show up trying to make their own mark on things, and in doing so, convert what started off as maybe a good idea into a litterbox.

Of course, they don’t start that way. They generally are at least dressed as good intentions. I’ve had a place in many of them and still retain placeholders in the larger ones.

In general, the main problem with most of Web 2.0 has been that it circles around advertising models based on impressions. This is the same model that works for spam: if you send a message out to 1 million people and only 0.01% take the bait, that’s 100 people. It’s about volume, and you can read up more about how much volume here.
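A quick sanity check of that arithmetic, using only the figures from the paragraph above:

```python
# Spam/impressions arithmetic from the paragraph above.
messages_sent = 1_000_000
response_rate = 0.0001            # 0.01% "take the bait"

responders = messages_sent * response_rate
print(int(responders))            # 100 -- a tiny rate still pays off at high volume
```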

Interestingly, when one looks at the companies that spam the most, two of them are walled-garden social networks: Facebook and LinkedIn. You might be surprised to know the U.S. generates the most spam.

It’s all about impressions, and algorithms that get impressions, be it via email spam or social networks.

It’s no wonder fake news has taken deep root in social networks, which act as echo chambers that users think are about their interests, but really are about getting the most impressions for advertising.

Now, it’s also about training artificial intelligences not just on user content, but also user interactions with each other and the platform – and whatever connects to the platform.

Facebook

My reasoning for recently leaving Facebook and cutting information going to Meta has some bearing on this. It started off simply enough for me almost 20 years ago; my then boss strongly suggested we get on Facebook since, as he put it, “it’s the future”.

It’s peculiar how people say, “it’s the future” without some qualifier of good or bad.

That the Facebook walled garden is now home to AI-generated spam pushed through its algorithms has recently been outed; I had my suspicions as a user for some time. Worse, the arbitrary restriction of user accounts has made the platform untenable for regular users, where sharing content from one group to another (cross-pollination) is apparently now seen by the algorithms as a negative rather than a positive.

That it remains so used is a matter of inertia. When paid placement started on Facebook, it was an indicator that paid placement either was the de facto algorithm for user content or would become it. The advertising you see isn’t necessarily for good products; it’s paid for by people who understand that if 100 people click out of 1 million and you get $1 from each one, you get $100. All you have to do is incessantly market a poor product and pay less than $100 in advertising: the standard Web 2.0 model.
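The same logic as a break-even sketch – the ad spend figure is an arbitrary example under $100, not a real number:

```python
# The "standard Web 2.0 model" with the numbers above.
impressions = 1_000_000
clicks = 100                      # the handful who click
revenue_per_click = 1.00          # dollars earned per click

revenue = clicks * revenue_per_click       # $100
ad_spend = 80.00                           # any spend under $100 leaves a profit
profit = revenue - ad_spend
print(revenue, profit)                     # 100.0 20.0 -- a poor product can still pay
```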

In time, people will realize that they don’t need that platform to share information.

Twitter, now X

Twitter is not something I ever really liked, because it is based on the same technologies that a few of us implemented for the Alert Retrieval Cache a year prior to Twitter being formed. My push was for it to be used for disaster communication, and in doing that I found a lot of issues related to trusted sources that I couldn’t work out. I stopped pushing on it until I had a resolution, but with bills to pay, that went to the back burner.

It’s no wonder that fake news became an issue on platforms, Twitter inclusive, because it was patently obvious that even accidentally, less-than-trusted sources could send messages that echoed across the Internet. When I did dip beyond my toes in supporting Ukraine on Twitter, I found a lot of propaganda (yes, from both sides), a lot of hate, and even some racism. It was a cesspool of humanity’s most hateful things, and from what I have observed in the feed when I do log in, it seems to have gotten worse.

People either praise or blame Elon Musk. He certainly hasn’t made it better, and his overt hypocrisy regarding free speech echoes across the Internet. Regardless, the platform was inherently flawed, particularly when it was time for it to start making money.

LinkedIn

LinkedIn started well enough as a place where people could post a more dynamic version of a resume. It was, at the time, a great idea. It also soon became a bit of a joke: when one is employed, one is connected to the people at the company, and when you update your LinkedIn profile you pretty much let the company you’re presently working for know that you’re shopping around. Therefore, it became pretty useless, because it didn’t really afford the level of privacy one needs to shop for employers.

It’s a harsh reality. Updating your LinkedIn profile could well have an employer looking for a way to get rid of you before you got rid of them. It works best if you’re unemployed, and as someone who has been in the professional arena for some decades, I can tell you that I never got a position through LinkedIn. All I got was loads of recruiter spam from people who obviously had not actually read my profile. Once, many years ago, I did some Java programming and have never touched the language since. Up until last year I was getting recruiter spam about Java positions requiring years of experience which… of course… I do not have. Little items like that through decades of software engineering became fodder for people claiming to be recruiters spamming me.

To their credit, they don’t seem to make money from advertising, but instead through selling ‘premium’ which I did try for a while. It wasn’t worth it to me, and I wouldn’t suggest it to anyone. Instead, I got positions through personal connections – real connections, registering with real headhunters, and even Craigslist for 2 software engineering positions. LinkedIn is just too easily gamed, and too easily a liability for employed people looking for a new position.

What they’re doing now, it would seem, is getting people to write articles on LinkedIn so that Microsoft can train its AI on them. It’s successful because people believe publishing on LinkedIn helps them find new positions, when instead it helps AIs write better at no cost.

It didn’t take long to start getting spam connection requests, considering I have a decent profile and a fair number of connections. People wanting to sell me stuff, and even worse, the few who ask for sweat equity.

It’s just another walled garden that has become a litterbox.

There Are Others.

Instagram, TikTok, etc, all pretty much do the same thing at this point with different flavors of litter for the litterbox. If you find value in these walled gardens, that’s fair and you should do as you see fit.

There are an increasing number of people who just feel stuck in them, having invested so much time and energy into them. This is the ‘Sunk Cost Fallacy‘: the tendency to follow through on something if we have already invested time, effort, or money into it, whether or not the current costs outweigh the benefits or value.

This is about having you chase the laser pointer in the hope that one day you’ll catch the red dot. You are a revenue stream in a walled garden, not a customer.

So What To Do?

If you can use your time on something productive, do that instead. The walled gardens become prisons because of the Sunk Cost Fallacy, with no parole. The only way out is to break out.

Past, Present, and Future: Some Thoughts On Intelligence.

One of the underlying concepts of Artificial Intelligence, as the name suggests, is intelligence. A definition of intelligence that fits this bit of writing is from a Johns Hopkins Q&A:

“…Intelligence can be defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor, and has evolved in lifeforms to adapt to diverse environments for their survival and reproduction. For animals, problem-solving and decision-making are functions of their nervous systems, including the brain, so intelligence is closely related to the nervous system…”

“Q&A – What Is Intelligence?”, Daeyeol Lee PhD, as quoted by Annika Weder, October 5th, 2020.

This definition fits well, because amid all the writing about different kinds of intelligences and human intelligence itself, the words of Arthur C. Clarke echo: “It has yet to be proven that intelligence has any survival value.”

I’m not saying that what he wrote is right as much as that it should make us think. He was good at making people think. The definition of intelligence above actually stands Clarke’s quote on its head, because it ties intelligence to survival. In fact, if we are going to really discuss intelligence, the only sort of intelligence that matters is related to survival. It’s not about the individual as much as the species.

We only talk about intelligence in other ways because of our society, the education system, and it’s largely self-referential in those regards. Someone who can solve complex physics equations might be in a tribe in the Amazon right now, but if they can’t hunt or add value to their tribe, all of that intelligence – as high as some might think it is – means nothing. Their tribe might think of that person as the tribal idiot.

It’s about adaptation and survival. This is important because of a paper I read last week that gave me pause about the value-laden history of intelligence, a history that causes the discussion of intelligence to fold in on itself:

“This paper argues that the concept of intelligence is highly value-laden in ways that impact on the field of AI and debates about its risks and opportunities. This value-ladenness stems from the historical use of the concept of intelligence in the legitimation of dominance hierarchies. The paper first provides a brief overview of the history of this usage, looking at the role of intelligence in patriarchy, the logic of colonialism and scientific racism. It then highlights five ways in which this ideological legacy might be interacting with debates about AI and its risks and opportunities: 1) how some aspects of the AI debate perpetuate the fetishization of intelligence; 2) how the fetishization of intelligence impacts on diversity in the technology industry; 3) how certain hopes for AI perpetuate notions of technology and the mastery of nature; 4) how the association of intelligence with the professional class misdirects concerns about AI; and 5) how the equation of intelligence and dominance fosters fears of superintelligence. This paper therefore takes a first step in bringing together the literature on intelligence testing, eugenics and colonialism from a range of disciplines with that on the ethics and societal impact of AI.”

“The Problem with Intelligence: Its Value-Laden History and the Future of AI” (Abstract), Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge, February 7th, 2020.

It’s a thought-provoking read, and one with some basis, citing examples from what should be considered the dark ages of society that still persist within modern civilization in various ways. One image encapsulates much of the paper:

Source: https://dl.acm.org/doi/10.1145/3375627.3375813

The history of how intelligence has been used, and even become an ideology, has deep roots that go back in the West as far as Plato. It’s little wonder that there is an apparent rebellion against intelligence in modern society.

I’ll encourage people to read the paper itself – it has been cited numerous times. It led me to questions about how this will impact learning models, since much of what is out there inherits the value-laden history demonstrated in the paper.

When we talk about intelligence of any sort, what exactly are we talking about? And when we discuss artificial intelligence, what man-made parts should we take with a grain of salt?

If the thought doesn’t bother you, maybe it should, because the only real intelligence that seems to matter is related to survival – and using intelligence ideologically is about the survival of those that prosper in the systems impacted by the ideology of intelligence – which includes billionaires, these days.