The Supreme Court, Your Social Network, and AI

One of the ongoing issues that people may not have paid much attention to involves the United States Supreme Court and social networks.

Understanding why this has an impact beyond the United States takes a little background. Still, we’ll start in the United States, with what started the ball rolling.

“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.

Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”

“Supreme Court Wary of States’ Bid to Limit Federal Contact With Social Media Companies“, Adam Liptak, New York Times, March 18, 2024

This deals with the last United States Presidential Election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and a lot of false information that was spread, and even there we see arguments about whether the government should be the only one spreading false information.

Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese gorilla (TikTok), are mainly based in the United States: Facebook, and the social network formerly known as Twitter. So the servers all fall under US jurisdiction.

Let’s pull that 800 lb Chinese gorilla back into the ring, too. The political issue of TikTok is at odds with who is collecting data from whom, since the Great Firewall of China keeps China’s data in China but lets data from the rest of the world go to its government.

Why is that data important? Because it’s being used to train artificial intelligences. Really, it’s about who trains their artificial intelligences faster.

Knock the dust off this old tune.

Even WordPress.com, where this site is presently hosted, got into the game by volunteering its customers before telling them how not to volunteer.

The Supreme Court is supposed to have the last say on all manner of things, and because of that there’s a level of ethics assumed of the members – which John Oliver dragged under a spotlight. Let’s just say: there are questions.

It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. Since technology companies lobby and support politicians, this means the social networks you use have more free speech than all of their users combined, based on their income alone. That’s not to mention their ability to choose what you see, what you can say, and who you can say it to, through algorithms they can’t seem to master themselves. In a way that’s heartening; in a way it’s sickening.

So the Supreme Court’s ruling on whether the United States government may interfere with social networks is also about who collects the data, and what sort of information will be used to train the artificial intelligences of the present and future.

The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.

Even more disturbing now is just how much content on the Internet is AI generated, which has already been noted to be significant, and which some experts estimate will reach 90% by 2026.

So who should control what you can post? Should governments decide? Should technology companies?

These days, few trust either. It seems like we need oversight on both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seat-belts.

When Is An Algorithm ‘Expressive’?

Yesterday, I was listening to a webinar on Privacy Law and the United States First Amendment when I heard that lawyers for social networks are claiming both that the network itself has free speech as a speaker, and that they are not the speaker at all but are simply presenting content users have expressed under the freedom of speech. How the arguments were presented I don’t know, and despite showing up for the webinar, I am not a lawyer1. The case before the Supreme Court was being discussed, but that’s not my focus here.

I’m exploring how it would be possible to claim that a company’s algorithms that impact how a user perceives information could be considered ‘free speech’. I began writing this post about that and it became long and unwieldy2, so instead I’ll write a bit about the broader impact of social networks and their algorithms and tie it back.

Algorithms Don’t Make You Obese or Diabetic.

If you say the word ‘algorithm’ around some people, their eyes immediately glaze over. It’s really not that complicated: a repeatable procedure is basically an algorithm. A recipe in use is an algorithm. Instructions from Ikea are algorithms. Both hopefully give you what you want, and if they don’t, they are ‘buggy’.

Let’s go with the legal definition of what an algorithm is3. Laws don’t work without definitions, and neither does code.

Per Cornell’s Legal Information Institute, an algorithm is:

“An algorithm is a set of rules or a computational procedure that is typically used to solve a specific problem. In the case of Vidillion, Inc. v. Pixalate Inc. an algorithm is defined as “one or more process(es), set of rules, or methodology (including without limitation data points collected and used in connection with any such process, set of rules, or methodology) to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations, including those that transform an input into an output, especially by computer.” With the increasing automation of services, more and more decisions are being made by algorithms. Some examples are; criminal risk assessments, predictive policing, and facial recognition technology.”

By this definition, and perhaps in its simplest form, adding two numbers is an algorithm, which also fits just about any technical definition out there. That’s not at issue.
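To make that concrete, here’s about the smallest thing that still fits the legal definition quoted above – a set of rules that transforms an input into an output. It’s a trivial sketch, not anything a social network would actually run:

```python
def add(a: int, b: int) -> int:
    """A set of rules that transforms two inputs into one output.

    By the Vidillion v. Pixalate definition quoted above, even this
    two-line procedure qualifies as an algorithm.
    """
    return a + b

print(add(2, 3))  # → 5
```

The point isn’t the code; it’s that ‘algorithm’ covers everything from this up to the feed-ranking systems discussed below, which is why definitions matter.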

What is at issue in the context of social networks is how algorithms impact what we view on a social networking website. We should all understand in broad strokes that Facebook, Twitter, TikTok and their ilk are in the business of showing people what they want to see, and to do this they analyze what people view so that they can give people what they want.

Ice cream and brownies for breakfast, everyone!

Let’s agree every individual bit of content you see that you can act on, such as liking or re-transmitting, is a single item. Facebook sees you like ice cream, Facebook shows you posts of ice cream incessantly. Maybe you go out and eat ice cream all the time because of this and end up with obesity and diabetes. Would Facebook be guilty of making you obese and diabetic?

Fast food restaurants aren’t considered responsible for making people obese and diabetic. We have choices about where we eat, just as we have choices about what we do with our lives outside of a social network context. Further, almost all of these social networks give you options not to view content, from blocking to reporting to randomly deleting your posts and wagging a finger at you for being naughty – without telling you why.

Timelines: It’s All A Story.

As I wrote elsewhere, we all choose our own social media adventures. Most of my social networks are pretty well tuned to feed me new things to learn every day, while doing a terrible job of providing me information on what all my connections are up to. It’s a real estate problem on social network sites, and not everyone can be in that timeline. Algorithms pick and choose, and if there are paid advertisements to give you free access, they need space too.

Think of it all as a personal newspaper. Everything you see is picked for you based on what the algorithms decide, and yet all of that information is competing to get into your eyeballs, maybe even your brain. Every story is shouting ‘pick me! pick me!’ with catchy titles, wonderful images, and maybe even some content – because everyone wants you to click to their website so you can hammer them with advertising.4

Yet when we step back from those individual stories, the social networking site is curating things into a prioritized order. Let’s assume that what it thinks you like to see most is at the top, and that it descends in priority based on what the algorithms have learned about you.

Now think of each post as a page in a newspaper. What’s on the front page affects how you perceive everything in the newspaper. Unfortunately, because it’s all shoved into a prioritized list for you, you get things that are sometimes in a strange order, giving a weird context.
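That prioritized list can be sketched in a few lines. This is a hypothetical toy, not any network’s real ranking: the topics and interest weights are invented, and real systems use far more signals than one number per topic. But the shape is the same – score each post by what the algorithms have learned about you, then sort:

```python
# Hypothetical posts in a feed, each tagged with a single topic.
posts = [
    {"topic": "politics", "text": "Court hears arguments"},
    {"topic": "ice cream", "text": "New sundae shop opens"},
    {"topic": "ice cream", "text": "Brownie à la mode recipe"},
]

# Invented per-topic weights, standing in for what the network
# has "learned" from your clicks and viewing time.
interest_weights = {"ice cream": 0.9, "politics": 0.2}

def score(post):
    # Unknown topics get a small default weight.
    return interest_weights.get(post["topic"], 0.1)

# The "front page" is just the highest-scoring posts first.
feed = sorted(posts, key=score, reverse=True)
for post in feed:
    print(post["text"])
```

With these invented weights, the ice cream posts land on the front page and the court case sinks to the bottom – which is exactly the ice-cream-for-breakfast problem described above.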

Sometimes you get stray things you’re not interested in because the algorithms have grouped you with others. Sometimes the priority of what you last wrote about will suddenly have posts related to it covering every page in that newspaper.

You might think you’re picking your own adventure through social media, but you’re not directly controlling it. You’re randomly hitting a black box to see what comes out in the hope that you might like it, and you might like the order that it comes in.

We’re all beta testers of social networks in that regard. They are constantly tweaking algorithms to try to do better, but doing better isn’t necessarily for you. It’s for them, and more than likely it’s also for training their artificial intelligences. It’s about as random as human interests are.

Developing Algorithms.

Having written software at various companies over the decades, I can tell you that if there were a conscious choice to express something with algorithms, to get people to think one way or the other (the point of ‘free speech’), it would have to be very coordinated.

Certain content would have to be weighted, as is done with advertising. Random content churning through feeds would not fire things off with the social networking algorithms unless someone manually chose to weight it across users. That requires a lot of coordination, lots of meetings, and lots of testing.
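Here’s what that kind of deliberate, coordinated weighting might look like in miniature. Everything here is invented for illustration – the labels, scores, and boost factor are assumptions – but it shows the difference between a learned relevance score and an overt editorial thumb on the scale:

```python
def ranked(items, boost_label="sponsored", boost=2.0):
    """Sort items by a learned score, with a deliberate boost
    applied to one class of content (a coordinated choice,
    not something the learned scores produced on their own)."""
    def final_score(item):
        base = item["score"]               # learned relevance score
        if item["label"] == boost_label:   # the overt, manual weighting
            base *= boost
        return base
    return sorted(items, key=final_score, reverse=True)

items = [
    {"label": "organic", "score": 0.8},
    {"label": "sponsored", "score": 0.5},
]

top = ranked(items)
print([item["label"] for item in top])
```

The sponsored item scores lower on its own (0.5 vs 0.8), but the boost lifts it to 1.0 and it outranks the organic post. Whether that single multiplier counts as the company ‘expressing’ something is precisely the question the lawyers are arguing about.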

It can be done. Advertising is one example where it has been done overtly. Another is the latest push against fake news, which has attempted to proactively check content with independent fact checkers.

Is that free speech? Is that freedom of expression of a company? If you look at this case again, you will likely draw your own conclusions. Legally, I have no opinion because I’m not a lawyer.

But as a software engineer, I look at it and wonder if this is a waste of the Court’s time.

  1. Software engineers and others should take an interest in the legal aspects of what we have worked on and will work on. Ethics are a thing. ↩︎
  2. It still is, and I apologize if it’s messy. This is a post I’ll likely have to revisit and edit. ↩︎
  3. Legal definitions of what an algorithm is might vary around the world. It might be worth searching for a legal definition where you are. ↩︎
  4. This site has advertising. It doesn’t really pay and I’m not going to shanghai viewers by misrepresenting what I write. It’s a choice. Yet to get paid for content, that’s what many websites do. If you are here, you’re appreciated. Thanks! ↩︎