The Future of Social Media: Why Decentralizing and the Fediverse Matter Now More Than Ever

There was a time before social media and social networks as we know them, when people talked to each other in person, isolated by geography. Then we figured out how to send our writing, and there was a period when pen pals and postcards were important. News organizations adopted technology faster as reports came in from an ever-increasing geography until, finally, we ran out of geography.

Social media has become an integral part of our lives, connecting us to friends, families, communities, and global events in ways that were unimaginable just a few decades ago. Yet, as platforms like Facebook, Instagram, and Twitter (now X) dominate our digital landscape, serious questions arise about privacy, control, and freedom. Who owns our data? How are algorithms shaping our perceptions? Are we truly free in these spaces? Are we instead slaves to the algorithms?

It’s time to rethink social media. Enter decentralization and the Fediverse—a revolutionary approach to online networking that prioritizes freedom, community, and individual ownership.

The Problem with Centralized Social Media And Networks

At their core, mainstream social media platforms operate on a centralized model. They are controlled by corporations with one primary goal: profit. This model creates several challenges:

  1. Privacy Violations: Your data (likes, shares, private messages) becomes a commodity, sold to advertisers and third parties.
  2. Algorithmic Control: Centralized platforms decide what you see, often prioritizing sensational or divisive content to keep you engaged longer.
  3. Censorship: Content moderation decisions are made by corporations, leading to debates about free speech and fair enforcement of rules.
  4. Monopolization: A handful of companies dominate the space, stifling innovation and giving users little choice.

All of this comes to the fore with the recent issues in the United States surrounding TikTok, which John Oliver recently covered on his show and which I mentioned here on KnowProSE.com prior. The reasons given for wanting to ban TikTok are largely things other social networks already do – the difference is who they do it for, or could potentially do it for. Yes, they are as guilty as any other social network of the same problems above.

These are real issues, too, related to who owns what regarding… you. They often leave you looking at the same kind of content, dragging you down a rabbit hole while simply reinforcing your biases, and should you step out of line, you might find your reach limited or, in some cases, completely taken away. These issues have left many users feeling trapped, frustrated, and disillusioned.

Recently, there has been a reported mass exodus from one controlled network to another – from Twitter to Bluesky.

There’s a better way.

What Is the Fediverse?

The Fediverse (short for “federated universe”) is a network of interconnected, decentralized platforms that communicate using open standards. Unlike traditional social media, the Fediverse is not controlled by a single entity. Instead, it consists of independently operated servers—called “instances”—that can interact with each other.

Popular platforms within the Fediverse include:

  • Mastodon: A decentralized alternative to Twitter.
  • Pixelfed: An Instagram-like platform for sharing photos.
  • PeerTube: A decentralized video-sharing platform.
  • WriteFreely: A blogging platform with a focus on minimalism and privacy.

These platforms empower users by giving them control over their data, their communities, and their online experiences.
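Under the hood, these platforms find each other through open standards, starting with WebFinger (RFC 7033) for discovering accounts and ActivityPub for exchanging posts. As a rough sketch (the handle below is a placeholder, not a real account), here is how a Fediverse address maps to the WebFinger lookup URL a server would query:

```python
# A rough sketch of the first step of federation: resolving a Fediverse
# handle via WebFinger (RFC 7033). The handle below is a placeholder.

def webfinger_url(handle: str) -> str:
    """Map an @user@host handle to the WebFinger URL a server would query."""
    user, host = handle.lstrip("@").split("@", 1)
    return f"https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"

print(webfinger_url("@alice@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct:alice@mastodon.social
```

The JSON returned from that endpoint points to the user’s ActivityPub actor document, which is how any instance can follow, and deliver posts to, accounts on any other instance.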


Why Decentralization Matters

  1. Data Ownership: In the Fediverse, your data stays under your control. Each server is independently operated, and many prioritize privacy and transparency.
  2. Freedom of Choice: You can choose or create a server that aligns with your values. If you don’t like one instance, you can switch to another without losing your connections.
  3. Resilience Against Censorship: No single entity has the power to shut down the entire network.
  4. Community-Centric: Instead of being shaped by algorithms, communities in the Fediverse are human-driven and often self-moderated.

How You Can Join the Movement

  1. Explore Fediverse Platforms: Start by creating an account on Mastodon or another Fediverse platform. Many websites like joinmastodon.org can help you find the right instance.
  2. Support Decentralization: Advocate for open standards and decentralized technologies in your circles.
  3. Educate Others: Share the benefits of decentralization with your friends and family. Help them see that alternatives exist.
  4. Contribute to the Ecosystem: If you’re tech-savvy, consider hosting your own instance or contributing to open-source projects within the Fediverse.

The Call to Action

Social media doesn’t have to be controlled by a handful of tech giants. The Fediverse represents a vision for a better internet—one that values privacy, freedom, and genuine community. By choosing decentralized platforms, you’re taking a stand for a more equitable digital future.

So, what are you waiting for? Explore the Fediverse, join the conversation, and help build a social media landscape that works for everyone, not just the corporations.

Take the first step today. Decentralize your social media life and reclaim your digital freedom!

joinmastodon.org

Echo Chambers, Ant Mills and Social Networks.

An ant mill, sometimes called a death spiral, went viral on social media some time ago. It’s a real thing.

Army ants follow pheromone trails, and if they get separated from the main group, they can end up following each other in circles until they either find a path out or die of exhaustion.

It’s mesmerizing to watch. It’s almost like they’re being held in orbit by something like gravity, but that’s not the case. They’re just lost ants, going in circles.

I’ve seen enough things on the Internet come and go to see a sort of commonality.

It’s actually a pretty good metaphor for echo chambers in social media. Social media isn’t just one big echo chamber; the echo chambers are based on attributes.

If you like dogs, for example, you can get into an echo chamber of dog memes. If you also happen to like cats – it is possible to like both – you can get into an echo chamber of cat memes. These are pretty benign echo chambers, but when you start seeing the same memes over and over, you can be pretty sure that echo chamber is in a spiral. You lose interest. You leave, finding a different ‘pheromone trail’ to follow or just… taking a hard right when everyone else is going left.

With centralized social networks such as Facebook or Twitter X, algorithms feed these echo chambers that connect people. When those echo chambers become stale, the connections within them remain, and before you know it you have the equivalent of a human ant mill in a social network. To stay in that echo chamber, critical thinking is ignored and confirmation biases are fed.

This also accelerates when the people who provide content to the echo chambers – the pheromones, if you will – leave. Some might follow them elsewhere, but the inertia keeps many there until… well, until they’re exhausted.

This seems a logical conclusion of displaying content algorithmically, or of promoting certain posts all the time in search results.

Do you find yourself using the same apps, doing the same things over and over with diminishing returns of happiness (assuming there is happiness in your echo chamber)? Does it seem like you’ve seen that meme before?

You might be in a spiral.

Get out. Don’t die sniffing someone else’s posterior.

From Inputs to The Big Picture: An AI Roundup

This started off as a baseline post regarding generative artificial intelligence and its aspects, and grew fairly long because even as I was writing it, information was coming out. It’s my intention to do a ‘roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate them, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.

It’s long enough where it could have been several posts, but I wanted it all together at least once.

No AI was used in the writing, though some images have been generated by AI.

The two versions of artificial intelligence on the table right now – the marketed and the reality – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.

The marketing aspect is a constant distraction, feeding us what helps with stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not being addressed in a cohesive way.

To simplify, this post breaks it down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.

The Input.

There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.

The Training Data.

The focus so far has been on what goes into the training data, which has already prompted lawsuits and, less obviously, eroded trust in the companies involved.

…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…

“How Tech Giants Cut Corners to Harvest Data for A.I.”, Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, New York Times, April 6, 2024 1

Of note, too, is that Google has been indexing AI-generated books – what is called ‘synthetic data’, something that has been warned against, but which companies are planning for or even doing already, consciously or unconsciously.

While some of these actions are of questionable legality, to many they are not of questionable ethics – hence the revolt mentioned last year against AI companies using content without permission. That revolt is of questionable effect, because no one seems to have insight into what the training data consists of, and no one appears to be auditing it.

There’s a need for that audit, if only to allow for trust.

…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.

“Auditing AI: The emerging battlefield of transparency and assessment”, Mark Dangelo, Thomson Reuters, 25 Oct 2023.

While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least publicly, though the United States is pushing for global regulations on artificial intelligence at the UN. There has been no apparent update on the status of that effort, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).

There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of learning data models and their sources, as well as the algorithms involved – and, to get a little ahead, the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it has been happening either.

Some websites are trying to block AI crawlers, and it is an ongoing process. Blocking them requires knowing who they are, and doesn’t guarantee that bad actors won’t stop by anyway.
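Blocking usually starts with robots.txt. A minimal sketch follows, using publicly documented crawler user-agent names; the list changes over time, and compliance is entirely voluntary on the crawler’s part:

```python
# Sketch: generate a robots.txt that asks known AI crawlers not to ingest a
# site. The user-agent strings are publicly documented crawler names; the
# list changes over time, and compliance is entirely voluntary.

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "ClaudeBot"]

def robots_txt(agents):
    """Build robots.txt rules disallowing each listed crawler site-wide."""
    lines = []
    for agent in agents:
        lines += [f"User-agent: {agent}", "Disallow: /", ""]
    return "\n".join(lines)

print(robots_txt(AI_CRAWLERS))
```

This only deters crawlers that identify themselves honestly and respect robots.txt; a bad actor can simply lie about its user agent, which is the point above.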

There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:

“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”

“New bill would force AI companies to reveal use of copyrighted art”, Nick Robins-Early, TheGuardian.com, April 9th, 2024.

Given how much information these companies have already collected from Web 2.0 forward – through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking – it’s pretty obvious that all of this would be in the training data as well.

The Algorithms.

The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person could answer some questions, even as a layperson, with less of a carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they have already taken.

There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.

The Hardware

Generative AI has brought about an AI chip race between Microsoft, Meta, Google, and Nvidia, which leaves smaller companies that can’t afford to compete in that arena at a disadvantage so great that catching up could be seen as impossible, at least at present.

The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.

The Output.

One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.

There is criticism that the algorithms are making the spread of false information faster, and the US Department of Justice is stepping up efforts to go after the misuse of generative AI. This is dangerous ground, since algorithms are being sent out to hunt the products of other algorithms, and the crossfire between them doesn’t care much about civilians.2

Education itself has been disrupted as students use generative AI. This is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford through the educational system, which is worth exploring.

Given that ChatGPT is presently 82% more persuasive than humans, likely because it has been trained on persuasive works (Input; Training Data), and since most content on the internet is marketing either products, services or ideas, that was predictable. While it’s hard to say how much content being put into training data feeds on our confirmation biases, it’s fair to say that at least some of it is. Then there are the other biases that the training data inherits through omission or selective writing of history.

There are a lot of problems, clearly, and much of it can be traced back to the training data, which even on a good day is as imperfect as we are. It can magnify, distort, or even be consciously influenced by good or bad actors.

And that’s what leads us to the Big Picture.

The Big Picture

…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…

“Inside the shadowy global battle to tame the world’s most dangerous technology”, Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon, Politico.com, March 26th, 2024

What most people don’t realize is that the ‘game’ includes social media and the information it provides for training models, such as what is happening with TikTok in the United States now. There is a deeper battle, and just perusing content on social networks gives data to those building training models. Even WordPress.com, where this site is presently hosted, is selling data, though there is a way to unvolunteer one’s self.

Even the Fediverse is open to data being pulled for training models.

All of this, combined with the persuasiveness of generative AI that has given psychologists pause, has democracies concerned about its influence. In a recent example, Grok, Twitter X’s AI for paid subscribers, fell victim to what was clearly satire and caused a panic – which should also have us wondering about how we view intelligence.

…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…

“Elon Musk’s Grok Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X”, Matt Novak, Gizmodo, 8 April 2024

Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of AI-driven fake news, deepening concerns related to social media and AI, and it should make us ask why billionaires involved in artificial intelligence wield the influence that they do. How much of that influence is generated? We have an idea of how much is lobbied for.

Meanwhile, Facebook has been spamming users and has been restricting accounts without demonstrating a cause. If there were a video tape in a Blockbuster on this, it would be titled, “Algorithms Gone Wild!”.

Journalism is also impacted by AI, though real journalists tend to be rigorous with their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that a newsroom that wants to be a trusted source will go out of its way to make sure that it is: it has a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information, because even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence barely participated at a government level.

Then we have the thing that concerns the most people: their lives. Jon Stewart even did a Daily Show segment on it, which is worth watching, because people are worried about generative AI taking their jobs, with good reason. Even as the Davids of AI3 square off for your market share, layoffs have been happening in tech as companies reposition for AI.

Meanwhile, AI is also apparently being used as a cover for some outsourcing:

Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…

“Don’t Be Fooled: Much “AI” is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.com, Apr 4, 2024

Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem in itself: the world often looks for something to blame, and having an automated scapegoat certainly muddies the waters.

And the waters of the Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.

In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.

  1. The full article at the New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
  2. That’s not just a metaphor, as the Israeli use of Lavender (AI) has been outed recently. ↩︎
  3. Not the Goliaths. David was the one with newer technology: The sling. ↩︎

Connecting WordPress.com Websites to Mastodon: The Good, The Bad, The Ugly.

Yesterday, I found that I could connect KnowProSE.com and RealityFragments.com to the Fediverse through Mastodon and decided to give it a try.

WordPress.com has a good article on connecting WordPress.com sites to the Fediverse, so there’s no need to rewrite that. What I noticed, however, is what everyone should be aware of.

I may actually disconnect the sites from the Fediverse in the near future because of what I write below, but if you are interested the links to the sites on Mastodon are:

That said, I’ll tell you why I’m not too pleased with these connections.

The Good

Clearly, having another outlet where posts are shared is always a good thing, and I actually had a good conversation related to something I posted because of it – these are good things. It also creates hashtags from the tags on your website.

Yet were they good enough? Is that enough?

The Bad

As it happens, these are automated accounts that the user apparently cannot log into on Mastodon. Because of that, interacting with users on Mastodon is not really something you can do. It automagically posts what you post on a WordPress.com site to the Fediverse, but it doesn’t handle the most important part of any social network: interaction.

I had hoped that the conversations would somehow connect to the comments on posts. That doesn’t happen. Also, because it posts only the title and an excerpt, the posts themselves don’t carry hashtags, which are how Fediverse users find content. [corrected]
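If hashtags matter to you, posting directly through Mastodon’s REST API (`POST /api/v1/statuses`) lets you control them yourself. A minimal sketch of building such a payload follows; the endpoint and its `status` field are part of Mastodon’s public API, while the title, link, and tags are illustrative:

```python
# Sketch: building the body of a Mastodon "POST /api/v1/statuses" call that
# appends a post's tags as hashtags. The endpoint and "status" field are
# from Mastodon's public REST API; the title, link, and tags are illustrative.

def status_payload(title: str, link: str, tags: list[str]) -> dict:
    """Compose a status string with trailing hashtags built from tags."""
    hashtags = " ".join("#" + tag.replace(" ", "") for tag in tags)
    return {"status": f"{title} {link} {hashtags}".strip()}

print(status_payload("A Blog Post", "https://example.com/post",
                     ["fediverse", "mastodon"]))
# {'status': 'A Blog Post https://example.com/post #fediverse #mastodon'}
```

Sending that payload would also require an access token from a real account on an instance, which is exactly what the automated bridge accounts don’t give you.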

Because Mastodon doesn’t have functionality to retransmit with commentary, there’s just no getting around that.

The Ugly

Search engines aren’t big on the Fediverse yet, largely because it is by nature decentralized. Thus, it doesn’t really help search engine ranking, it doesn’t help people find your content through hashtags (the Bad), and it has a level of interactivity that is depressing enough to make one consider not doing it at all.

Takeaway

I am presently not impressed with this offering for the reasons above, but I also know that sometimes time is a powerful factor. Things change, things are seen in a new light, etc.

For now, I’ll leave them up as they are and see what happens. I think I’ll give it about a month. Thus, if you read this article in May and the links to the Fediverse no longer work, you’ll know that I deemed them a waste of space.

Week One of Mastodon.

I’ve been on Mastodon a week now and thought I should write a little bit about the experience.

There’s not much to write about. It works. There are interesting people to follow, I’m confident that my data isn’t being collected, and my feed is always interesting because someone else’s algorithm isn’t controlling what I see.

It also turns out that when I wrote that attempting to use Mastodon was ‘like trying to shag an unwilling octopus’, it had a lot to do with the people who landed there from my elder networks not really explaining anything – leaving me wondering which server to join, whether I needed to build my own server, and so on.

It’s actually quite easy. It doesn’t really matter which server you’re on (I’m on mastodon.social) because they all connect through the Fediverse, which is to say that they are decentralized.

Relative to other social networks.

That last part is so important to me. When I was active on Facebook, I saw a very large decline over the years of quality content that I wanted to see. This was underlined by the latest discovery that Facebook is spamming users.

Twitter, or if you’re a Musk-bro, ‘X’, is much the same thing. What’s hilarious is that both of those social networks are trying to train their generative AIs and have the worst platforms because of AI and algorithms. Web 2.0 meets AI, chaos ensues.

LinkedIn deserves mention here since so many people use it, but as far as professional networking goes, I don’t think it counts as much as building real connections outside the leering eyes of Microsoft. Being asked to help write articles for them, which I’m sure will be used to train their AI, just so I can have a cool title? Nope, no thanks. Hit me in the wallet.

Pros and Cons.

I have yet to have a negative experience with anyone on Mastodon. In fact, when I respond to someone’s post for the first time, I get prompted to basically be courteous, and I expect other people are prompted as well.

I do miss being able to comment on something I retransmit – in Mastodon speak, that’s boosting. I’m not sure why that is, but I’ve found it’s not something I actually need.

The only thing that Mastodon lacks so far is connections with some family and friends who haven’t moved to Mastodon. That’s simply a factor of inertia, much like in the 1990s many people thought ‘The Internet’ was AOL, which Facebook has mimicked pretty well.

In all, I’m finding Mastodon worthwhile, and much less twitchy than the other social networks, largely because I’m not seeing crap I don’t want to see.

If I have a quiet mind to do other things and a social network is in the background, I consider that a win. Mastodon is a win.

Facebook’s Algorithms Spamming Users.

If you haven’t left Facebook yet, as I have, you’ve probably noticed a lot of AI spam. I did when I was there and blocked a bunch of it (it was hard to keep up with).

Well, it isn’t just you.

“…What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”…”

“Facebook’s Algorithm Is Boosting AI Spam That Links to AI-Generated, Ad-Laden Click Farms”, Jason Koebler, 404 Media, March 19, 2024

So not only are the algorithms arbitrarily restricting user accounts, as they did mine, but they’re also feeding people spam to an extent that was noticeable well beyond any single individual.

Meanwhile, Facebook has been buying GPUs to develop ‘next level’ AI, when in fact their algorithms are about as gullible as their GPU purchases are numerous.

Glad I left that platform.

The Battle For Your Habits.

A meme found floating around today in the wild. As an atheist who doesn’t use Chrome, I know he ain’t talking to me.

There are some funny memes going around about TikTok and… Chinese spyware, or what have you. The New York Times had a few articles on TikTok last week that were interesting and yet… missed a key point that the memes do not.

Being afraid of Chinese Spyware while so many companies have been spying on their customers seems more of a bias than anything.

Certainly, India got rid of TikTok and has done better for it. Personally, I don’t like giving my information to anyone if I can help it, but these days it can’t be helped. Why is TikTok an issue in the United States?

It’s not too hard to speculate that it’s about lobbying by American tech companies who lost the magic sauce for this generation. It’s also not hard to credit India’s reasoning about China being able to push its own agenda, particularly with violence on their shared borders.

Yet lobbying from the American tech companies is the most likely reason, because they want your data and don’t want you giving it to China. They want to be able to sell you stuff based on what you’ve viewed, liked, posted, and so on. So really, it’s not even about us.

It’s about the data that we give away daily when browsing social networks of any sort, or websites, or even when you think you’re being anonymous using Google Chrome when in fact you’re still being tracked. The people advocating banning TikTok aren’t holding anyone else’s feet to the fire; they warn that ‘they will do stuff with your information’ when in fact a lot of bad things have already happened with our information over the years.

Found circulating as a meme, which led me to check out StoneToss.com – some really great work there.

Since 9/11 in particular, the US government has taken a pretty big interest in electronic trails, all in the interest of national security, with the FBI showing up after the Boston Marathon bombing just because people had been looking at pressure cookers.

All of this information will possibly get poured into learning models for artificial intelligences, too. Even WordPress.com volunteered people’s blogs rather than asking for volunteers.

What value do you get for that? They say you get better advertising, which is something that I boggle at. Have you ever heard anyone wish that they could see better advertising rather than less advertising?

They say you get the stuff you didn’t even know you wanted, and to a degree, that might be true, but the ability to just go browse things has become a lost art. Just about everything you see on the flat screen you’re looking at is because of an algorithm deciding for you what you should see. Thank you for visiting, I didn’t do that!

Even that system gets gamed. This past week I got an ‘account restriction’ from Facebook for reasons that were not explained, beyond instructions to go read the community standards; the algorithms are deciding based on behaviors that Facebook can’t seem to explain. Things really took off with that during Covid, when even people I knew were spreading wrong information because they didn’t know better and, sometimes, willfully didn’t want to know better or understand their own posts in a broader context.

Am I worried about TikTok? Nope. I don’t use it. If you do use TikTok, you should be. But you should worry if you use any social network. It’s not so much about who is selling and reselling information about you as what they can do with it to control what you see.

Of course, most people on those platforms don’t see them for what they are, instead taking things at face value and not understanding the implications they have on future choices, which could range from advertising to the content one views.

China’s not our only problem.

The Supreme Court, Your Social Network, and AI

One of the ongoing issues that people maybe haven’t paid as much attention to is related to the United States Supreme Court and social networks.

That this has a larger impact than just within the United States takes a little bit of understanding. Still, we’ll start in the United States, with what started the ball rolling.

“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.

Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”

“Supreme Court Wary of States’ Bid to Limit Federal Contact With Social Media Companies”, Adam Liptak, New York Times, March 18, 2024

This deals with the last United States presidential election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and the false information that was spread, and even there we see arguments about whether the government should be the only one spreading false information.

Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese gorilla (TikTok), are mainly based in the United States: Facebook, the social network formerly known as Twitter. So the servers all fall under US jurisdiction.

Let’s pull that 800 lb Chinese gorilla back into the ring too, where the political issue of TikTok is at odds with who is collecting data from whom, since the Great Firewall of China keeps China’s data in China but lets the world’s data flow to its government.

Why is that data important? Because it’s being used to train artificial intelligences. It’s about who trains their artificial intelligences faster, really.

Knock the dust off this old tune.

Even WordPress.com, where this site is presently hosted, got into the game by volunteering its customers’ content before telling them how not to volunteer.

The Supreme Court is supposed to have the last say on all manner of things, and because of that there’s a level of ethics assumed of its members, which John Oliver dragged under a spotlight. Let’s just say: there are questions.

It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. This means that, since technology companies lobby and support politicians, the social networks you use have more free speech than all their users combined based on their income alone, not to mention their ability to choose what you see, what you can say, and who you can say it to via algorithms that they can’t seem to master themselves. In a way that’s heartening; in a way it’s sickening.

So the Supreme Court’s ruling on whether the United States government can interfere with social networks is also about who collects the data, and about what sort of information will be used to train the artificial intelligences of the present and future.

The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.

Even more disturbing is just how much content on the Internet is now AI generated, already noted to be a significant amount and estimated by some experts to reach 90% by 2026.

So who should control what you can post? Should governments decide? Should technology companies?

These days, few trust either. It seems we need oversight of both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seatbelts.

Social Networks, Privacy, Revenue, and AI

I’ve seen more and more people leaving Facebook because their content just isn’t getting into timelines, and it’s interesting to consider why. While some of the complaints about the Facebook algorithms are fun to read, writing those sorts of complaints doesn’t accomplish much. It’s not as if Facebook is going to change its algorithms over complaints.

As I’ve pointed out before, people using Facebook aren’t the customers. People using Twitter-X aren’t the customers either. To be a customer, you have to buy something. Who buys things on social networks? Advertisers, for one.

That’s something Elon Musk didn’t quite get the memo on. Why would he be this confident? Hubris? Maybe; that always seems a factor, but it’s probably something more sensible.

Billionaires used to be much better spoken, it seems.

There’s something pretty valuable in social networks that people don’t see: the user data, which is strangely what the canceled Westworld was about. The real value is in being able to predict what people want and influence outcomes, much as the television series showed after the first season.1

Many people seem to think that privacy is only about credit card information and personal details. It also includes choices that allow algorithms to predict choices. Humans are black boxes in this regard, and if you have enough computing power you can go around poking and prodding to see the results.

Have you noticed that these social networks are linked somehow to AI initiatives? Facebook is tied to Meta’s AI initiatives. Musk, chief twit at X, has his fingers in the AI pie too.

Artificial intelligences need learning models, and if you own a social network, you not only get to poke and prod – you have the potential to influence. Are your future choices something that fall under privacy? Probably not – but your past choices probably should be because that’s how you get to predicting and influencing future choices.

I never really got into Twitter; Facebook was less interruptive. On the surface, these started off as content management systems that provided a service supported by paid advertising, yet now one has to wonder at the value of the user data. Back in 2018, it emerged that Cambridge Analytica had harvested data from 50 million Facebook users. Zuckerberg later apologized and talked about how third-party apps would be limited. To his credit, I think it was handled pretty well.

Still, it also signaled how powerful and useful that data could be, and if you own a social network, that should at least give you pause. After all, Cambridge Analytica influenced politics at the least, and could have also influenced markets. The butterfly effect reigns supreme in the age of big data and artificial intelligence.

This is why privacy is important in the age of artificial intelligence learning models, algorithms, and so forth. It can impact the responses one gets from any large language model, which is why there are serious questions regarding privacy, copyright, and other things related to training them. Bias leaks into everything, and popular bias on social networks reflects whoever is most vocal and repetitive, not what is actually correct. This is also why cancel culture as a phenomenon can be so damaging: it’s a nuclear option in the world of information, and oddly, large groups of smart or stupid people can use it with impunity.

This is why we see large language models hedge on some questions presently: conflicts within the learning model, as well as some well-designed algorithms. For that we should be a little grateful.

We should probably be lobbying to find out what is in the learning models these artificial intelligences are given, in much the same way we used2 to grill people who would represent us collectively. Sure, Elon Musk might be taking a financial hit, but what if it’s a gambit to leverage user data for bigger returns later, with his ethics embedded in how he gets his companies to do it?

You don’t have to like or dislike people to question them and how they use this data, but we should all be a bit concerned. Yes, artificial intelligence is pretty cool and interesting, but unleashing it without questioning the integrity of the information it’s trained on is, at the least, foolish.

Be careful what you share, what you say, who you interact with, and why. Quizzes that require access to your user profile are definitely questionable; that information, along with information about the people you’re connected to, quickly gets folded into a digital shadow of yourself, part of a larger crowd that can influence the now and the future.

  1. This is not to say it was canceled for this reason. I only recently watched it, and have yet to finish season 3, but it’s very compelling and topical content for the now. Great writing and acting. ↩︎
  2. We don’t seem to be that good at grilling people these days, perhaps because of all of this and more. ↩︎