The Future of Social Media: Why Decentralization and the Fediverse Matter Now More Than Ever

There was a time before social media and social networks as we know them, when people would talk to each other in person, isolated by geography. Then we figured out how to send our writing, and there was a period when pen pals and postcards were important. News organizations adopted technology faster as reports came in from an ever-increasing geography until, finally, we ran out of geography.

Social media has become an integral part of our lives, connecting us to friends, families, communities, and global events in ways that were unimaginable just a few decades ago. Yet, as platforms like Facebook, Instagram, and Twitter (now X) dominate our digital landscape, serious questions arise about privacy, control, and freedom. Who owns our data? How are algorithms shaping our perceptions? Are we truly free in these spaces? Are we instead slaves to the algorithms?

It’s time to rethink social media. Enter decentralization and the Fediverse—a revolutionary approach to online networking that prioritizes freedom, community, and individual ownership.

The Problem with Centralized Social Media and Networks

At their core, mainstream social media platforms operate on a centralized model. They are controlled by corporations with one primary goal: profit. This model creates several challenges:

  1. Privacy Violations: Your data – likes, shares, private messages – becomes a commodity, sold to advertisers and third parties.
  2. Algorithmic Control: Centralized platforms decide what you see, often prioritizing sensational or divisive content to keep you engaged longer.
  3. Censorship: Content moderation decisions are made by corporations, leading to debates about free speech and fair enforcement of rules.
  4. Monopolization: A handful of companies dominate the space, stifling innovation and giving users little choice.

All of this comes to the fore with the recent issues in the United States surrounding TikTok, which John Oliver recently mentioned on his show and which I mentioned here on KnowProSE.com prior. The reasons given for wanting to ban TikTok are largely things other social networks already do – the difference is who they do it for, or could potentially do it for. Yes, TikTok is as guilty as any other social network of the same problems above.

These are real issues, too, related to who owns what regarding… you. These platforms often leave you looking at the same kind of content, dragging you down a rabbit hole while simply reinforcing your biases. Should you step out of line, you might find your reach limited or, in some cases, completely taken away. These issues have left many users feeling trapped, frustrated, and disillusioned.

Recently, there has been a reported mass exodus from one controlled network to another – from Twitter to Bluesky.

There’s a better way.

What Is the Fediverse?

The Fediverse (short for “federated universe”) is a network of interconnected, decentralized platforms that communicate using open standards. Unlike traditional social media, the Fediverse is not controlled by a single entity. Instead, it consists of independently operated servers—called “instances”—that can interact with each other.
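
To make “open standards” concrete: most of these platforms speak ActivityPub, with WebFinger handling account discovery. Here's a minimal sketch of how one server can look up a user on another – the account name is a placeholder, and it assumes Python with the requests library installed:

```python
import requests

# WebFinger (RFC 7033) is the discovery layer most Fediverse servers expose.
# The instance and account below are placeholders for illustration only.
instance = "mastodon.social"
account = "someuser"  # hypothetical account name

resp = requests.get(
    f"https://{instance}/.well-known/webfinger",
    params={"resource": f"acct:{account}@{instance}"},
    timeout=10,
)
resp.raise_for_status()

# The response lists links; one of them points to the ActivityPub actor.
actor_url = next(
    (link["href"] for link in resp.json()["links"]
     if link.get("type") == "application/activity+json"),
    None,
)

if actor_url:
    # Any server that speaks ActivityPub can fetch and read this actor.
    actor = requests.get(
        actor_url,
        headers={"Accept": "application/activity+json"},
        timeout=10,
    ).json()
    print(actor.get("preferredUsername"), actor.get("inbox"))
```

Because every instance exposes the same standard endpoints, any server can follow, fetch, and deliver to any other – which is what makes the network federated rather than centralized.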

Popular platforms within the Fediverse include:

  • Mastodon: A decentralized alternative to Twitter.
  • Pixelfed: An Instagram-like platform for sharing photos.
  • PeerTube: A decentralized video-sharing platform.
  • WriteFreely: A blogging platform with a focus on minimalism and privacy.

These platforms empower users by giving them control over their data, their communities, and their online experiences.


Why Decentralization Matters

  1. Data Ownership: In the Fediverse, your data stays under your control. Each server is independently operated, and many prioritize privacy and transparency.
  2. Freedom of Choice: You can choose or create a server that aligns with your values. If you don’t like one instance, you can switch to another without losing your connections.
  3. Resilience Against Censorship: No single entity has the power to shut down the entire network.
  4. Community-Centric: Instead of being shaped by algorithms, communities in the Fediverse are human-driven and often self-moderated.

How You Can Join the Movement

  1. Explore Fediverse Platforms: Start by creating an account on Mastodon or another Fediverse platform. Many websites like joinmastodon.org can help you find the right instance.
  2. Support Decentralization: Advocate for open standards and decentralized technologies in your circles.
  3. Educate Others: Share the benefits of decentralization with your friends and family. Help them see that alternatives exist.
  4. Contribute to the Ecosystem: If you’re tech-savvy, consider hosting your own instance or contributing to open-source projects within the Fediverse.

The Call to Action

Social media doesn’t have to be controlled by a handful of tech giants. The Fediverse represents a vision for a better internet—one that values privacy, freedom, and genuine community. By choosing decentralized platforms, you’re taking a stand for a more equitable digital future.

So, what are you waiting for? Explore the Fediverse, join the conversation, and help build a social media landscape that works for everyone, not just the corporations.

Take the first step today. Decentralize your social media life and reclaim your digital freedom!

joinmastodon.org

Beyond A Widowed Voice.

By now, the news of Scarlett Johansson’s issues with OpenAI over the voice that sounds like hers has made the rounds. She’s well known, and regardless of one’s interests, she’s likely to pop up in various contexts. However, she’s not the first.

While different in some ways, voice actors Paul Skye Lehrman and Linnea Sage are suing Lovo for similar reasons. They were hired to do what they thought were one-off voiceovers, then heard their voices saying things they had never said. More to the point, they heard their voices doing work they didn’t get paid for.

The way they found out was oddly poetic.

Last summer, as they drove to a doctor’s appointment near their home in Manhattan, Paul Skye Lehrman and Linnea Sage listened to a podcast about the rise of artificial intelligence and the threat it posed to the livelihoods of writers, actors and other entertainment professionals.

The topic was particularly important to the young married couple. They made their living as voice actors, and A.I. technologies were beginning to generate voices that sounded like the real thing.

But the podcast had an unexpected twist. To underline the threat from A.I., the host conducted a lengthy interview with a talking chatbot named Poe. It sounded just like Mr. Lehrman.

“He was interviewing my voice about the dangers of A.I. and the harms it might have on the entertainment industry,” Mr. Lehrman said. “We pulled the car over and sat there in absolute disbelief, trying to figure out what just happened and what we should do.”

“What Do You Do When A.I. Takes Your Voice?”, Cade Metz, New York Times, May 16th, 2024.

They aren’t sex symbols like Scarlett Johansson. They weren’t the highest paid actresses in 2018 and 2019. They aren’t seen in major films. Their problem is just as real, just as audible, but not quite as visible. Forbes covered the problems voice actors faced in October of 2023.

…Clark, who has voiced more than 100 video game characters and dozens of commercials, said she interpreted the video as a joke, but was concerned her client might see it and think she had participated in it — which could be a violation of her contract, she said.

“Not only can this get us into a lot of trouble if people think we said [these things], but it’s also, frankly, very violating to hear yourself speak when it isn’t really you,” she wrote in an email to ElevenLabs that was reviewed by Forbes. She asked the startup to take down the uploaded audio clip and prevent future cloning of her voice, but the company said it hadn’t determined that the clip was made with its technology. It said it would only take immediate action if the clip was “hate speech or defamatory,” and stated it wasn’t responsible for any violation of copyright. The company never followed up or took any action.

“It sucks that we have no personal ownership of our voices. All we can do is kind of wag our finger at the situation,” Clark told Forbes.

“‘Keep Your Paws Off My Voice’: Voice Actors Worry Generative AI Will Steal Their Livelihoods”, Rashi Shrivastava, Forbes.com, October 9th, 2023.

As you can see, the whole issue is not new. It just became more famous because of a more famous face, and involves OpenAI, a company that has more questions about its training data than ChatGPT can answer, so the story has been sung from rooftops.

Meanwhile, some are trying to license the voices of dead actors.

Sony recently warned AI companies about unauthorized use of the content they own, but when one’s content is necessarily public, how do you do that?

How much of what you post, from writing to pictures to voices in podcasts and family videos, can you control? Posting costs nothing, but it can cost individuals their futures. And when it comes to training models, these AI companies are eroding the very trust they need from the people they want to sell their product to – unless they’re just enabling talentless and incapable hacks to take over jobs that talented and capable people already do.

We have more questions than answers, and the trust erodes as more and more people are impacted.

AI, Democracy, India.

India is the world’s most populous democracy. There has been a lot going on there related to religion that is well beyond the scope of this post, but it deserves mention because violence has been involved.

The Meta Question.

In the latest news, Meta stands accused of approving political ads on its platforms, Instagram and Facebook, that have incited violence.

This, apparently, was a test, according to TheGuardian.

How this happened seems a little strange and is noteworthy[1]:

“…The adverts were created and submitted to Meta’s ad library – the database of all adverts on Facebook and Instagram – by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, to test Meta’s mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India’s six-week election…”

“Revealed: Meta approved political ads in India that incited violence”, Hannah Ellis-Petersen in Delhi, TheGuardian, 20 May 2024.

It’s hard to judge the veracity of the claim based on what I dug up (see the footnote). TheGuardian must have more from their sources to be willing to publish the piece – I have not seen this before with them – so I’ll assume good faith and see how this pans out.

Meta claims to be making efforts to minimize false information, but Meta also doesn’t have a great track record.

The Deepfake Industry of India.

Wired.com also has a story, with some investigation behind it, that does not relate to Meta.

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”[2] by Wired.com goes into great detail about Divyendra Singh Jadoun and how his business is doing well.

“…Across the ideological spectrum, they’re relying on AI to help them navigate the nation’s 22 official languages and thousands of regional dialects, and to deliver personalized messages in farther-flung communities. While the US recently made it illegal to use AI-generated voices for unsolicited calls, in India sanctioned deepfakes have become a $60 million business opportunity. More than 50 million AI-generated voice clone calls were made in the two months leading up to the start of the elections in April—and millions more will be made during voting, one of the country’s largest business messaging operators told WIRED.

Jadoun is the poster boy of this burgeoning industry. His firm, Polymath Synthetic Media Solutions, is one of many deepfake service providers from across India that have emerged to cater to the political class. This election season, Jadoun has delivered five AI campaigns so far, for which his company has been paid a total of $55,000. (He charges significantly less than the big political consultants—125,000 rupees [$1,500] to make a digital avatar, and 60,000 rupees [$720] for an audio clone.) He’s made deepfakes for Prem Singh Tamang, the chief minister of the Himalayan state of Sikkim, and resurrected Y. S. Rajasekhara Reddy, an iconic politician who died in a helicopter crash in 2009, to endorse his son Y. S. Jagan Mohan Reddy, currently chief minister of the state of Andhra Pradesh. Jadoun has also created AI-generated propaganda songs for several politicians, including Tamang, a local candidate for parliament, and the chief minister of the western state of Maharashtra. “He is our pride,” ran one song in Hindi about a local politician in Ajmer, with male and female voices set to a peppy tune. “He’s always been impartial.”…”

“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve”, Nilesh Christopher & Varsha Bansal, Wired.com, 20 May 2024.

Al Jazeera has a video on this as well.

In the broader way it is being used, audio deepfakes have people really believing that they were called personally by candidates. This has taken robocalling to a whole new level[3].

What we are seeing is the manipulation of opinions in a democracy through AI, and while it is happening in India now, it is certainly worth worrying about in other nations. Banning something in one country, or making it illegal, does not mean that foreign actors won’t do it where the laws have no hold.

Given India’s increasingly visible stance in the world, we should be concerned, but given AI’s increasing visibility in global politics as a tool to shape opinions, we should be very worried indeed. This is just what we see. What we don’t see is the data collected from a lot of services, how it can be used to decide who is most vulnerable to particular types of manipulation, and what that means.

We’ve built a shotgun from instructions on the Internet and have now loaded it and pointed it at the feet of our democracies.

  1. Digging into the referenced report itself (PDF), there’s no ownership of the report itself within the document, though it is on the Eko.org web server – with no links to it from the site itself at the time of this writing. There’s nothing on the India Civil Watch International (ICWI) website at the time of this writing either.

    That’s pretty strange. The preceding report referenced in the article is here on LondonStory.org. Neither the ICWI or Eko websites seem to have that either. Having worked with some NGOs in the Caribbean and Latin America, I know that they are sometimes slow to update websites, so we’ll stick a pin in it. ↩︎
  2. Likely paywalled if you’re not a Wired.com subscriber, and no quotes would do it justice. Links to references provided. ↩︎
  3. I worked for a company that was built on robocalling, but went to higher ground with telephony by doing emergency communications instead, so it is not hard for me to imagine how AI can be integrated into it. ↩︎

AI: Technology, Skepticism, and Your Underwear.

Here are two images depicting futuristic underwear powered by AI technology. The designs are sleek and modern, featuring smart fibers and sensors, with a minimalist setting to emphasize the advanced technology.

There’s a balance between technology and humanity that at least some of us think is out of desirable limits now. In fact, it’s been out of limits for some time, and to illustrate the fact I had DALL-E generate some images of AI-powered underwear – because if technology were resting close to one’s nether regions, it might be something one would be more careful about, from privacy to quality control.

“…Those social media companies, again, offered convenience and a — as we all know — too good to be true promise of free and open access. We closed our blogs, got in line and ceded control of our social graphs. Drawbridges were rolled up, ads increased and nobody left — at least not in droves. Everyone is there so everyone stays.

Journalists and newspapers were drawn in, promised an audience and were gifted with capricious intermediaries that destroyed the profession and industry.

We lost our handle on what is and was true, stopped having conversations and started yelling at their representations. It became easier to shout at someone online than it was to have a healthier discourse…”

“The tech industry doesn’t deserve optimism, it has earned skepticism”, Cory Dransfeldt, CoryD.Dev, May 6th, 2024.

Cory writes quite poignantly in that article of the promises made by technology in the past. In that excerpt, he also alludes to what I call the ‘Red Dots’ that keep us distracted, increasingly feral, and rob us of what is really important to us – or should be.

This melds well with the point in a Scientific American opinion I read today, particularly the aspect of AI and social media:

…A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to the most insecure version of ourselves and push our buttons to achieve their own desired ends…

“AI Doesn’t Threaten Humanity. Its Owners Do”, Joseph Jones, Scientific American, May 6th, 2024.

Again, these are things I have written about regarding AI, which connects to social media, which connects to you, gentle reader – your habits, your information, your privacy. In essence, your life.

I’d say we’re on the cusp of something, but it’s the same cusp. We can and should be skeptical of these companies trying to sell us on a future that they promise us but have never really given us.

There’s a flood of AI marketing, silly AI products, and sillier ideas about AI that confound me, like AI chatbots in the publishing industry so you can chat with a bot about a book you read.

There’s silly, and there’s worrisome, like teens becoming addicted to AI chatbots – likely because there’s less risk than dealing with other human beings, the possible feelings of rejection, the anxiety associated with it, and the very human aspect of…

Being human.

I have advocated and will continue to advocate for sensible use of technology, but I’m not seeing as much of it mixed in with the marketing campaigns that suck the air out of the room, filling it with a synthetic ether that has the potential to drown who we are rather than help us maximize who we can be.

We should be treating new technology like underwear we will put next to our nether regions. Carefully. Be picky. And if it’s too good to be true – it likely is. And if the person selling it to you is known for shortcuts, expect uncomfortable drafts around your nether regions.

I’d grown tired of writing about it, and thank those whose articles got mentioned for giving me that second wind.

Paying To Whitewash The Fence of AI.

I suppose a lot of people may not have read Tom Sawyer, since it has been banned here and there. Yet there is a part of the book that seems really appropriate today, and it is unfortunate that people didn’t read it. It’s a great con.

It’s in Chapter 2 that Tom Sawyer gets punished and has to whitewash a fence for his Aunt Polly, and when mocked about his punishment by another boy, he claims whitewashing the fence is fun. It’s so fun, in fact, that the other kid gives Tom an apple (an initial offer was the apple core, I believe), and so Tom pulled this con on other kids and got their treasures while they painted the fence. He got ‘rich’ and had fun at their expense while they did his penance.

That’s what’s happening with social media like Facebook, LinkedIn, Twitter, etc.

Videos, text, everything being generated on these social networks is being used to train generative AI that you can use for free – at least for now – while others pay and subscribe to get the better trained versions.

It’s a classic con – a pretty good one that I suppose people didn’t read about.

Some people will complain when the AIs start taking over whitewashing the fences, or start whitewashing their children.

Meanwhile, these same companies are selling metaphorical paint and brushes.

I suppose this is why reading is important.

Oddly, the premise of the ban of “The Adventures of Tom Sawyer” was that librarians found Mr. Sawyer to be a “questionable” protagonist in terms of his moral character.

Happy Painting.

Critical Thinking In The Age Of AI.

Critical thinking is the ability to suspend judgement, and to consider evidence, observations and perspectives in order to form a judgement, requiring rational, skeptical and unbiased analysis and evaluation.

It can be difficult, particularly being unbiased, rational and skeptical in a world that seems to require responses from us increasingly quickly.

Joe Árvai, a psychologist who has done research on decision making, recently wrote an article about critical thinking and artificial intelligence.

“…my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, TheConversation, April 12, 2024.

It’s a good article, well worth the read, and it’s in the vein of what I have been writing recently about ant mills and social media. Web 2.0 was built on commerce which was built on marketing. Good marketing is about persuasion (a product or service is good for the consumer), bad marketing is about manipulation (where a product or service is not good for the consumer). It’s hard to tell the difference between the two.

Inputs and Outputs.

We don’t know exactly how much of Web 2.0 was shoveled into the engines of generative AI learning models, but we do know that chatbots and generative AI are now considered more persuasive than humans. In fact, ChatGPT 4 is presently considered 82% more persuasive than humans, as I mentioned in my first AI roundup.

This should at least be a little disturbing, particularly when there are already sites telling people how to get GPT4 to create more persuasive content, such as this one, and yet the key difference between persuasion and manipulation is whether it’s good for the consumer of the information or not – a key problem with fake news.

Worse, we have all seen products and services that had brilliant marketing but were not good products or services. If you have a bunch of stuff sitting and collecting dust, you fell victim to marketing, and arguably, manipulation rather than persuasion.

It’s not difficult to see that the marketing of AI itself could be persuasive or manipulative. If you had a tool that could persuade people they need the tool, wouldn’t you use it? Of course you would. Do they need it? Ultimately, that’s up to the consumers, but if they in turn are generating AI content that feeds the learning models – what is known as synthetic data – it creates its own problems.

Critical Thought

Before generative AI became mainstream, we saw issues with people sharing fake news stories because they had catchy headlines and fed a confirmation bias. A bit of critical thought applied could have avoided much of that, but it still remained a problem. Web 2.0 to present has always been about getting eyes on content quickly so advertising impressions increased, and some people were more ethical about that than others.

Most people don’t really understand their own biases, but social media companies implicitly do – we tell them with our every click, our every scroll.

This is compounded by the scientific evidence that attention spans are shrinking. Based on research, the new average attention span is 47 seconds. That’s not a lot of time to do critical thinking before liking or sharing something.

While there’s no hard evidence of how much critical thought is actually happening, the diminished average attention span is a solid indicator that, on average, people are using less of it.

“…Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of their communities more than they might value their own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser…”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, TheConversation, April 12, 2024.

In an age of generative artificial intelligence that is here to stay, it’s paramount that we understand ourselves better as individuals and collectively so that we can make thoughtful decisions.

A Quick Look At Perplexity.AI

The Perplexity.AI logo, from their press pack.

In fiddling about on Mastodon, I came across a post linking to an article on Forbes: “‘Like Wikipedia And ChatGPT Had A Kid’: Inside The Buzzy AI Startup Coming For Google’s Lunch“.

Well, that deserved a look if only because search engine results across the board give spammy first pages, Google inclusive, and Wikipedia is a resource that I like because of one main thing: citations.

ChatGPT is… well, it’s interesting, but it’s… limited because you have to double check everything from your prompt to the results. So the idea of mixing the two is definitely attractive.

Thus, I ended up at Perplexity.ai and did some searches – some tricky ones related to me that I know other search engines often get wrong. Perplexity stumbled in the results but cited the sources that pushed it the wrong way, as well as the sources that pushed it the right way.

That’s perfect for me right now. It gives the citations above the response, so you know where stuff is coming from. You can then omit citations that are wrong while drilling down into what it is you’re supposed to be looking into. For me, with the amount of research I do, this saves me a whole lot of tabs in my web browser and therefore allows me a mental health bonus in keeping track of what I’m writing about.

Of course, when I find something useful like this, I put it under bright lights and interrogate it because on impulse I almost subscribed immediately. I’ve held off, at least for now, but so far it has me pondering my ChatGPT4 subscription since it’s much more of what I need and much less of what I don’t. When I am researching things to write, I need to be able to drill down and not be subject to hallucinations. I need the sources. ChatGPT can do that, and ChatGPT gives me access to DALL-E, but how many images do I need? How often do I use ChatGPT? Not that much, really.

I’m also displeased with present behemoth search engines, particularly since they collect information. Does Perplexity.ai collect information on users? According to the Perplexity.AI privacy policy, they do not. That’s at least hopeful. In the shifting landscape of user data, it’s hard to say what the future holds with any company. A buyout, change in management or a shift in the wind of a public toilet could cause policies to change, so we constantly have to keep an eye on that, but in the immediate, it is promising.

My other main query was about the Fediverse, which is notoriously not indexed. This is largely because of the nature of the Fediverse. It didn’t have much on that, as I expected.

I’ll be using it anonymously for a while to see how it works for me. If you’re looking for options for researching topics, Perplexity.ai may be worth a look.

Otherwise I would not have written about it.

Echo Chambers, Ant Mills and Social Networks.

There’s an ant mill, sometimes called a death spiral, that went viral on social media some time ago. It’s a real thing.

Army ants follow pheromone trails, and if they get separated from the main group, they can end up following each other in circles until they either find a path out or die of exhaustion.

It’s mesmerizing to watch. It’s almost like they’re being held in orbit by something like gravity, but that’s not the case. They’re just lost ants, going in circles.

I’ve seen enough things on the Internet come and go to see a sort of commonality.

It’s actually a pretty good metaphor for echo chambers in social media. Social media isn’t just one singular echo chamber; echo chambers form around attributes.

If you like, as an example, dogs, you can get into an echo chamber of dog memes. If you also happen to like cats – it is possible to like both – you can get into an echo chamber of cat memes. These are pretty benign echo chambers, but when you start seeing the same memes over and over, you can be pretty sure that echo chamber is in a spiral. You lose interest. You leave, finding a different ‘pheromone trail’ to follow or just… taking a hard right when everyone is going left.

With centralized social networks such as Facebook or Twitter X, algorithms feed these echo chambers that connect people. When those echo chambers become stale, the connections with others within the echo chamber remain and before you know it you have the equivalent of an ant mill of humans in a social network. To stay in that echo chamber, critical thinking is ignored and confirmation biases are fed.

This also accelerates when the people who provide content to the echo chambers – the pheromones, if you will – leave. Some might follow them elsewhere, but the inertia keeps many there until… well, until they’re exhausted.

This seems a logical conclusion to the algorithmic display of content, or promoting certain posts all the time in search results.
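
As a toy sketch – not any platform’s actual algorithm, just a made-up reinforcement rule in Python – here’s how quickly a feed can collapse when the algorithm simply shows more of whatever got engagement:

```python
import random

# Toy model: the "algorithm" shows more of whatever already got engagement.
# Topics, boost factor, and engagement rate are invented for illustration.
topics = ["dogs", "cats", "politics", "tech"]
weights = {t: 1.0 for t in topics}

random.seed(42)
for _ in range(1000):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if random.random() < 0.5:    # the user engages about half the time
        weights[shown] *= 1.1    # engagement boosts future exposure

total = sum(weights.values())
print({t: round(w / total, 3) for t, w in weights.items()})
# Early random luck compounds multiplicatively, so one topic tends to
# crowd out the rest: the feed spirals, much like the ants.
```

The exact numbers don’t matter; the point is that reinforcement without a counterweight spirals, and you end up circling the same content.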

Do you find yourself using the same apps, doing the same things over and over with diminishing returns of happiness (assuming there is happiness in your echo chamber)? Does it seem like you’ve seen that meme before?

You might be in a spiral.

Get out. Don’t die sniffing someone else’s posterior.

From Inputs to The Big Picture: An AI Roundup

This started off as a baseline post regarding generative artificial intelligence and its aspects, and grew fairly long because even as I was writing it, information was coming out. It’s my intention to do a ‘roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.

It’s long enough that it could have been several posts, but I wanted it all together at least once.

No AI was used in the writing, though some images have been generated by AI.

The two versions of artificial intelligence on the table right now – the marketed and the real – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.

The marketing aspect is a constant distraction, feeding us what helps with stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not being addressed in a cohesive way.

To simplify, this post breaks things down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.

The Input.

There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.

The Training Data.

The focus so far has been on what goes into their training data, and that has been an issue – including lawsuits and, less obviously, trust in the involved companies.

…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…

“How Tech Giants Cut Corners to Harvest Data for A.I.”, Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, New York Times, April 6, 2024 [1]

Of note, too, is that Google has been indexing AI-generated books – what is called ‘synthetic data’ – which has been warned against, but is something that companies are planning for or even doing already, consciously and unconsciously.

While the legality of some of these actions is questionable, to some the ethics are not questionable at all – thus the revolt mentioned last year against AI companies using content without permission. It’s of questionable effect because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.

There’s a need for that audit, if only to allow for trust.

…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.

“Auditing AI: The emerging battlefield of transparency and assessment”, Mark Dangelo, Thomson Reuters, 25 Oct 2023.

While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least in a public way, though the United States is pushing for global regulations on artificial intelligence at the UN. The status of that doesn’t seem to have been updated, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).

There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of learning data models and their sources, as well as the algorithms involved – and, just to get a little ahead, the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it has been happening either.

Some websites are trying to block AI crawlers, and it is an ongoing process. Blocking them requires knowing who they are, and doesn’t guarantee bad actors won’t stop by.
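
As a hedged illustration, blocking usually starts with a site’s robots.txt file, which well-behaved crawlers honor voluntarily. The user-agent tokens below are ones the companies have publicly documented – GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl – though the list keeps changing:

```
# robots.txt - a minimal sketch asking documented AI crawlers to stay out.
# Compliance is voluntary; bad actors can simply ignore this file.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Anything stricter means filtering requests at the server itself, which again depends on knowing who the crawlers are.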

There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:

“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”

“New bill would force AI companies to reveal use of copyrighted art”, Nick Robins-Early, TheGuardian.com, April 9th, 2024.

Given how much information is used by these companies already from Web 2.0 forward, through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking, it’s pretty obvious that this would be in the training data as well.

The Algorithms.

The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person can answer some questions, even as a layperson, with far less of a carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.

There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.

The Hardware.

Generative AI has brought about an AI chip race between Microsoft, Meta, Google, and Nvidia, which definitely leaves smaller companies that can’t afford to compete in that arena at a disadvantage so great that it could be seen as insurmountable, at least at present.

The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.

The Output.

One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.

There is criticism that the algorithms are making the spread of false information faster, and the US Department of Justice is stepping up efforts to go after the misuse of generative AI. This is dangerous ground, since algorithms are being sent out to hunt the products of other algorithms, and the crossfire between them doesn’t care too much about civilians.[2]

Education itself has been disrupted as students use generative AI. It is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford in the educational system, which is something worth exploring.

Given that ChatGPT is presently 82% more persuasive than humans, likely because it has been trained on persuasive works (Input; Training Data), and since most content on the internet is marketing either products, services or ideas, that was predictable. While it’s hard to say how much content being put into training data feeds on our confirmation biases, it’s fair to say that at least some of it is. Then there are the other biases that the training data inherits through omission or selective writing of history.

There are a lot of problems, clearly, and much of it can be traced back to the training data. Even on a good day, that data is as imperfect as our own imperfections; it can magnify, distort, or even be consciously influenced by good or bad actors.

And that’s what leads us to the Big Picture.

The Big Picture

…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…

“Inside the shadowy global battle to tame the world’s most dangerous technology”, Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon, Politico.com, March 26th, 2024

What most people don’t realize is that the ‘game’ includes social media and the information it provides for training models, such as what is happening with TikTok in the United States now. There is a deeper battle, and just perusing content on social networks gives data to those building training models. Even WordPress.com, where this site is presently hosted, is selling data, though there is a way to unvolunteer one’s self.

Even the Fediverse is open to data being pulled for training models.

All of this, combined with the persuasiveness of generative AI that has given psychology pause, has democracies concerned about the influence. In a recent example, Grok, Twitter X’s AI for paid subscribers, fell victim to what was clearly satire and caused a panic – which should also have us wondering about how we view intelligence.

…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…

“Elon Musk’s Grok Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X”, Matt Novak, Gizmodo, 8 April 2024

Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of fake news with AI, deepening concerns related to social media and AI, and should have us asking why billionaires involved in artificial intelligence wield the influence that they do. How much of that influence is generated? We have an idea how much is lobbied for.

Meanwhile, Facebook has been spamming users and restricting accounts without demonstrating cause. If there were a videotape in a Blockbuster on this, it would be titled “Algorithms Gone Wild!”.

Journalism is also impacted by AI, though real journalists tend to be rigorous with their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, they will go out of their way to make sure that they are: they have a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information because, even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence did not participate much, if at all, at a government level.

Then we have the thing that concerns the most people: their lives. Jon Stewart even did a Daily Show segment on it, which is worth watching, because people are worried about generative AI taking their jobs with good reason. Even as the Davids of AI[3] square off for your market share, layoffs have been happening in tech as companies reposition for AI.

Meanwhile, AI is also apparently being used as a cover for some outsourcing:

Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…

“Don’t Be Fooled: Much ‘AI’ is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.com, Apr 4, 2024

Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.

And the waters of The Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.

In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.

  1. The full article at the New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
  2. That’s not just a metaphor, as the Israeli use of Lavender (AI) has been outed recently. ↩︎
  3. Not the Goliaths. David was the one with newer technology: The sling. ↩︎

Connecting WordPress.com Websites to Mastodon: The Good, The Bad, The Ugly.

Yesterday, I found that I could connect KnowProSE.com and RealityFragments.com to the Fediverse through Mastodon and decided to give it a try.

WordPress.com has a good article on connecting WordPress.com sites to the Fediverse, so there’s no need to rewrite that. What I noticed, however, is what everyone should be aware of.

I may actually disconnect the sites from the Fediverse in the near future because of what I write below, but if you are interested, both sites can be found on Mastodon.

That said, I’ll tell you why I’m not too pleased with these connections.

The Good

Clearly, having another outlet where posts are shared is always a good thing, and I actually had a good conversation related to something I posted because of it – these are good things. It creates hashtags from the tags created on your website.

Yet were they good enough? Is that enough?

The Bad

As it happens, these are automated accounts that the user cannot apparently log into on Mastodon. Because of that, interacting with users on Mastodon is not really something you can do. It automagically posts what you post on a WordPress.com site to the Fediverse, but it doesn’t handle the most important part of any social network: interaction.

I had hoped that the conversations would somehow connect to the comments on posts. That doesn’t happen. Also, because it posts the title and an excerpt, it doesn’t include hashtags, which are how Fediverse users find content. [corrected]

Because Mastodon doesn’t have functionality to retransmit with commentary, there’s just no getting around that.

The Ugly

Search engines aren’t big on the Fediverse yet, and that’s largely because it is by nature decentralized. Thus, it doesn’t really help search engine ranking, it doesn’t help people find your content through hashtags (the Bad), and it has a level of interactivity that is depressing enough to consider not doing it at all.

Takeaway

I am presently not impressed with this offering for the reasons above, but I also know that sometimes time is a powerful factor. Things change, things are seen in a new light, etc.

For now, I’ll leave them up as they are and see what happens. I think I’ll give it about a month. Thus, if you read this article in May and the links to the Fediverse no longer work, you’ll know that I deemed them a waste of space.