I’ve been on Mastodon a week now and thought I should write a little bit about the experience.
There’s not much to write about. It works. There are interesting people to follow, I’m confident that my data isn’t being collected, and my feed is always interesting because someone else’s algorithm isn’t controlling what I see.
It also turns out that when I wrote that attempting to use Mastodon was ‘like trying to shag an unwilling octopus’, it had a lot to do with the people who landed there from my older networks and didn’t really explain anything – leaving me wondering which server to join, whether I needed to build my own server, and so on.
It’s actually quite easy. It doesn’t really matter which server you’re on – I’m on mastodon.social – because they all connect through the Fediverse, which is to say that they are decentralized, at least relative to other social networks.
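For the curious, here’s a minimal sketch of what that means in practice: every Mastodon server exposes the same public API, so the same few lines work against any of them. The endpoint comes from the Mastodon documentation; the server names and the use of Python’s requests library are just illustrative choices.

```python
# Any Mastodon server speaks the same API, which is the point of
# federation: the server you pick barely matters. Endpoint per the
# Mastodon docs; the servers below are just examples.
import requests

def public_timeline(server: str, limit: int = 5):
    """Fetch recent public posts from a Mastodon server."""
    resp = requests.get(
        f"https://{server}/api/v1/timelines/public",
        params={"limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for server in ("mastodon.social", "fosstodon.org"):
    print(server, "->", len(public_timeline(server)), "public posts")
```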
That last part – no one else’s algorithm deciding what I see – is so important to me. When I was active on Facebook, I saw a steep decline over the years in the quality of the content I wanted to see. This was underlined by the recent discovery that Facebook is spamming users.
Twitter, or if you’re a Musk-bro, ‘X’, is much the same thing. What’s hilarious is that both of those social networks are trying to train their generative AIs while having the worst platforms because of AI and algorithms. Web 2.0 meets AI, chaos ensues.
LinkedIn deserves a mention here since so many people use it, but as far as professional networking goes, I don’t think it counts as much as building real connections outside the leering eyes of Microsoft. And being asked to help write articles for them – which I’m sure will be used to train their AI – just so I can have a cool title? Nope, no thanks. Hit me in the wallet.
Pros and Cons.
I have yet to have a negative experience with anyone on Mastodon. In fact, when I respond to someone’s post for the first time, I get prompted to basically be courteous, and I expect other people are prompted as well.
I do miss being able to comment on something I retransmit – in Mastodon speak, that’s boosting. I’m not sure why the feature is missing, but I’ve found it’s not something I actually need.
The only thing that Mastodon lacks so far is connections with some family and friends who haven’t moved to Mastodon. That’s simply a matter of inertia, much like in the 1990s when many people thought ‘The Internet’ was AOL – something Facebook has mimicked pretty well.
In all, I’m finding Mastodon worthwhile, and much less twitchy than the other social networks, largely because I’m not seeing crap I don’t want to see.
If I have a quiet mind to do other things and a social network is in the background, I consider that a win. Mastodon is a win.
Of course, these networks don’t start out that way. They generally begin at least dressed as good intentions. I’ve had a place in many of them and still retain placeholders in the larger ones.
In general, the main problem with most of Web 2.0 has been that it has revolved around advertising models based on impressions. This is the same model that works for spam: if you send a message out to 1 million people and only 0.01% take the bait, that’s 100 people. It’s about volume, and you can read up more about how much volume here.
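To make that arithmetic concrete, here’s a back-of-the-envelope sketch – the rates and dollar figures are the illustrative numbers used in this post, not real campaign data:

```python
# Back-of-the-envelope spam/impression economics.
# All numbers are the illustrative ones from this post, not real data.
messages_sent = 1_000_000
response_rate = 0.0001       # 0.01% 'take the bait'
revenue_per_response = 1.00  # $1 from each respondent

responses = messages_sent * response_rate   # 100 people
revenue = responses * revenue_per_response  # $100.00

print(f"{responses:.0f} responses -> ${revenue:.2f}")
# The model works as long as blasting out a million messages costs
# less than $100 -- which, for spam and for impressions, it does.
```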
It’s all about impressions, and algorithms that get impressions, be it via email spam or social networks.
It’s no wonder fake news has taken deep root in social networks, which act as echo chambers that users think are about their interests, but really are about getting the most impressions for advertising.
Now, it’s also about training artificial intelligences not just on user content, but also user interactions with each other and the platform – and whatever connects to the platform.
Facebook
My reasoning for recently leaving Facebook and cutting information going to Meta has some bearing on this. It started off simply enough for me almost 20 years ago; my then boss strongly suggested we get on Facebook since, as he put it, “it’s the future”.
It’s peculiar how people say, “it’s the future” without some qualifier of good or bad.
That the Facebook walled garden is now home to AI-generated spam pushed through its algorithms has recently been outed; I had my suspicions as a user for some time. Worse, the arbitrary restriction of user accounts has made the platform untenable for regular users, where sharing content from one group to another (cross-pollination) is apparently now seen by the algorithms as a negative rather than a positive.
That it remains so widely used is a matter of inertia. When paid placement started on Facebook, it was an indicator that paid placement either already was the de facto algorithm for user content or would become it. The advertising you see isn’t necessarily for good products; it’s paid for by people who understand that if 100 people out of 1 million click and you get $1 from each one, you get $100. All you have to do is incessantly market a poor product and pay less than $100 in advertising: the standard Web 2.0 model.
In time, people will realize that they don’t need that platform to share information.
It’s no wonder that fake news became an issue on platforms, Twitter included, because it was patently obvious that even accidentally, less-than-trusted sources could send messages that echoed across the Internet. When I did dip beyond my toes in supporting Ukraine on Twitter, I found a lot of propaganda (yes, from both sides), a lot of hate, and even some racism. It was a cesspool of humanity’s most hateful things, and from what I have observed in the feed when I do log in, it seems to have gotten worse.
People either praise or blame Elon Musk. He certainly hasn’t made it better, and his overt hypocrisy regarding free speech echoes across the Internet. Regardless, the platform was inherently flawed, particularly when it was time for it to start making money.
LinkedIn
LinkedIn started well enough as a place where people could post a more dynamic version of a resume. It was, at the time, a great idea. It also soon became a bit of a joke: when you’re employed, you’re connected to the people in your company, and when you update your LinkedIn profile you pretty much let the company you’re presently working for know that you’re shopping around. Therefore, it became pretty useless because it didn’t really afford the level of privacy one needs to shop for employers.
It’s a harsh reality. Updating your LinkedIn profile could well have an employer looking for a way to get rid of you before you got rid of them. It works best if you’re unemployed, and as someone who has been in the professional arena for some decades, I can tell you that I never got a position through LinkedIn. All I got was loads of recruiter spam from people who obviously had not actually read my profile. Once, many years ago, I did some Java programming and never touched the language since. Up to last year I was still getting recruiter spam about Java positions requiring years of experience which… of course… I do not have. Little items like that, through decades of software engineering, became fodder for people claiming to be recruiters spamming me.
To their credit, they don’t seem to make money from advertising, but instead through selling ‘premium’, which I did try for a while. It wasn’t worth it to me, and I wouldn’t suggest it to anyone. Instead, I got positions through personal connections – real connections – registering with real headhunters, and even Craigslist for two software engineering positions. LinkedIn is just too easily gamed, and too easily a liability for employed people looking for a new position.
What they’re doing now, it would seem, is getting people to write articles on LinkedIn so that Microsoft can train its AI on them. It’s successful because people believe publishing on LinkedIn helps them find new positions, when instead it helps AIs write better at no cost.
It also didn’t take long to start getting spam connections, considering I have a decent profile and a fair number of connections. People wanting to sell me stuff, and even worse, the few who ask for sweat equity.
It’s just another walled garden that has become a litterbox.
There Are Others.
Instagram, TikTok, etc., all pretty much do the same thing at this point, with different flavors of litter for the litterbox. If you find value in these walled gardens, that’s fair, and you should do as you see fit.
There is an increasing number of people who just feel stuck in them, having invested so much time and energy into them. This is the ‘Sunk Cost Fallacy‘: the tendency to follow through on something once we have already invested time, effort, or money into it, whether or not the current costs outweigh the benefits or value.
If you can use your time on something productive, do that instead. The walled gardens become prisons because of the Sunk Cost Fallacy, with no parole. The only way out is to break out.
The people who follow me on Facebook, as well as those within the Reality Fragments group, know about the troubles I’ve had with account restrictions. I had just gotten off one restriction, for reasons never disclosed to me, when a few days later I got restricted again, and again, with no reasons given.
This reminded me of how I had to stop posting about things related to Ukraine, where a Ukrainian tractor pulling the Moskva was flagged as fake news when it was clearly satire. While I do support the Palestinian civilians, I hadn’t really posted anything in support of Palestine (I learned my lesson with Ukraine), so I don’t think the accusation that Meta is blocking pro-Palestinian content fits my particular situation1.
I looked over things and had no clue. Facebook dutifully told me something about the Community Standards and suggested I read them, but there was no reason given, no posts pointed to, nothing. Again. At least when I got put in ‘Facebook Jail’ for posting an image of a Ukrainian tractor pulling the Moskva, they gave me the reasoning that it was fake news. Satire is not news, fake or otherwise.
So I thought to myself, “Why am I on Facebook?” I had reasons long ago. I even had reasons not so long ago. But what are my reasons now? I did an inventory. Facebook gave me no reasons. Sure, you can talk about the value of the community, but Facebook’s algorithms don’t really let me see people’s posts as I would like, and I have too many friends, relatives and acquaintances to visit each of their pages daily.
In a way, Facebook shadow bans people: you have to pay to place content. Google does something similar, but people with some understanding look beyond the first few search results and even the first page.
The Decision
This problem was part of a storm of problems I’ve been seeing lately. You see, all I really want to do is write and earn enough to live off of, and maybe put something in the bank for ‘rainy days’. It’s not that complex a goal. People have told me all my life I should write, so here I am, scribbling away. I haven’t published the books I have wanted to yet, but yes, here I am, writing, and part of that writing is research – even for, and especially for, fiction. Fiction has to make sense, reality does not2.
First, generative AI came out and I had to look at how well it wrote – which isn’t really that great, but it gets its writing ability from analyzing popular stuff, so in time it should catch up with your local tabloid and perhaps even surpass it. It can regurgitate facts, so it will do some popular writing and maybe even do it well – but it doesn’t have the capacity to be human, and it never will.
Marketing has already got it spamming Amazon.com with eBooks, and Facebook has plenty of advertising about selling generated content. I even saw advertising on Facebook for generated content that passes AI detection – a great way to get money from people who aren’t actually creating anything of value but are instead overtaking the web with generated content, a further step down from the marketed Internet as it is3.
I started a Facebook group as a way to get people to interact with my content, because WordPress.com sort of sucks at that with its separate logins. That wasn’t working out too well; it just became a handful of people whom I knew pretty well but who never shared anything I wrote with their friends, as if I were some sort of secret that they kept to themselves – and the Facebook algorithms shadow ban anyone who doesn’t pay. I did promote a few posts in the past, but it just made no sense and wasn’t cost effective for me.
Factor in that WordPress.com and Tumblr, where my sites presently reside, volunteered their users’ blogs – and perhaps even personal information – for sale as AI training data, and only then told us how to opt out. I wrote about what you can do about that, but really, the trust has been eroded there. As someone who pays for hosting, it’s injurious insult and insulting injury at the same time.
Without reflection, without mercy, without shame, they built strong walls and high, and compassed me about.
And here I sit now and consider and despair.
It wears away my heart and brain, this evil fate: I had outside so many things to terminate.
Oh! why when they were building could I not beware!
But never a sound of building, never an echo came. Insensibly they drew the world and shut me out.
C.P. Cavafy, “Walls”
The trouble is that I’m not very good at despair. People have moved my cheese before. When there is no clear path, you think like a burglar or a man with a sledgehammer.
The decision became easy once I took it all in within my broader context. My Facebook use would become passive, a placeholder. I’ll check in on it now and then, but there’s no real reason to be there training Meta’s AI on how to fleece users of their digital shadows. I’ll just not play their game because I don’t like their prizes.
But It Doesn’t End There.
While backing up some things to my Google Drive, I ran across something interesting when I was reviewing the appropriate permissions.
My activity off Meta technologies? Well, who has been sharing data with Meta as part of their business analytics? It turns out it was a pretty exhaustive list.
I was disconnecting them one by one when I found the way to do it en masse. The list showed 263 websites apparently reporting to Meta about stuff – and it was closer to 300 before I started counting. I’d say maybe 280 or so, but I don’t know, I was in a rhythm.
I disconnected them all.
The only thing happening with Facebook now is the RealityFragments page updating, which presently happens automatically because of settings on Facebook – and even that may end soon as I think it through. After all, I noticed that the updates hadn’t been happening automatically for some time.
In the end, I’ve found Facebook not worth it and I was only continuing out of habit. It certainly took more information from me than I like, and it gave me little in return.
So What Social Network Will I Use?
One person suggested Mastodon. Mastodon, when I did try to use it a few times, made me feel like I was attempting to ‘shag an unwilling octopus’. LinkedIn was also suggested, but the joke I have heard from people way too many times is, “You still use LinkedIn?”
Twitter has been Musked, and I never really liked it anyway – it has always been cliquish, and nearly as unfriendly as it is now, though with more oversight and less Musk.
I’ll look things over as time permits, but my focus now is on writing – not handing people my data so that they can make money. To be fair, Mastodon has the most promise on paper, but my experience with trying to set that up last year did not make me think highly of it – and I’ve compiled my own Linux kernels. Maybe that has changed.
I know that I’ve changed. I know that people can find my content through search engines – where most visitors come from for me – and if they want to share it, they can. I’m tired of caring about that.
There are some funny memes going around about TikTok and… Chinese spyware, or what have you. The New York Times had a few articles on TikTok last week that were interesting and yet… missed a key point that the memes do not.
Being afraid of Chinese Spyware while so many companies have been spying on their customers seems more of a bias than anything.
Certainly, India got rid of TikTok and has done better for it. Personally, I don’t like giving my information to anyone if I can help it, but these days it can’t be helped. Why is TikTok an issue in the United States?
It’s not too hard to speculate that it’s about lobbying by American tech companies who lost the magic sauce for this generation. It’s also not hard to consider India’s reasoning about China being able to push its own agenda, particularly with violence on their shared borders.
Yet lobbying from the American tech companies is most likely, because they want your data and don’t want you to give it to China. They want to be able to sell you stuff based on what you’ve viewed, liked, posted, etc. So really, it’s not even about us.
It’s about the data that we give away daily when browsing social networks of any sort, or websites, or even when you think you’re being anonymous using Google Chrome when in fact you’re still being tracked. The people advocating banning TikTok aren’t holding anyone else’s feet to the fire, instead leaning on ‘they will do stuff with your information’ – when in fact we’ve had a lot of bad stuff happen with our information over the years.
What value do you get for that? They say you get better advertising, which is something that I boggle at. Have you ever heard anyone wish that they could see better advertising rather than less advertising?
They say you get the stuff you didn’t even know you wanted, and to a degree, that might be true, but the ability to just go browse things has become a lost art. Just about everything you see on the flat screen you’re looking at is because of an algorithm deciding for you what you should see. Thank you for visiting, I didn’t do that!
Even that system gets gamed. This past week I got an ‘account restriction’ from Facebook for reasons that were not explained, other than instructions to go read the community standards – because algorithms are deciding based on behaviors that Facebook can’t seem to explain. Things really took off with that during Covid, where even people I knew were spreading wrong information because they didn’t know better and, sometimes, willfully didn’t want to know better or understand their own posts in a broader context.
Am I worried about TikTok? Nope. I don’t use it. If you do use TikTok, you should be. But you should worry if you use any social network. It’s not as much about who is selling and reselling information about you as it is about what they can do with it to control what you see.
Of course, most people on those platforms don’t see them for what they are, instead taking things at face value and not understanding the implications for the choices they will have in the future – choices that could range from the advertising to the content that one views.
One of the ongoing issues that people maybe haven’t paid as much attention to is related to the United States Supreme Court and social networks.
That this has a larger impact than just within the United States takes a little bit of understanding. Still, we’ll start in the United States and what started the ball rolling.
“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.
Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”
This deals with the last United States presidential election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and a lot of false information that was spread – and even there we see arguments about whether the government should be the only one spreading false information.
Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese gorilla (TikTok), are mainly based in the United States: Facebook, and the social network formerly known as Twitter. So the servers all fall under US jurisdiction.
Why is that data important? Because it’s being used to train artificial intelligences. It’s about who trains their artificial intelligences faster, really.
It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. This means, since technology companies lobby and support politicians, the social networks you use have more free speech than the users combined based on their income alone – not to mention their ability to choose what you see, what you can say, and who you can say it to by algorithms that they can’t seem to master themselves. In a way that’s heartening, in a way it’s sickening.
So the Supreme Court’s ruling on whether the United States government can interfere with social networks is also about who collects the data, and about what sort of information will be used to train the artificial intelligences of the present and future.
The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.
So who should control what you can post? Should governments decide? Should technology companies?
These days, few trust either. It seems like we need oversight on both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seat-belts.
Yesterday, I was listening to the webinar on Privacy Law and the United States First Amendment when I heard that lawyers for social networks are claiming both that the network has free speech as a speaker, and that it is not the speaker at all – that it simply presents content users have expressed under their own freedom of speech. How the arguments were presented I don’t know, and despite showing up for the webinar I am not a lawyer1. The case before the Supreme Court was being discussed, but that’s not my focus here.
I’m exploring how it would be possible to claim that a company’s algorithms that impact how a user perceives information could be considered ‘free speech’. I began writing this post about that and it became long and unwieldy2, so instead I’ll write a bit about the broader impact of social networks and their algorithms and tie it back.
Algorithms Don’t Make You Obese or Diabetic.
If you say the word ‘algorithm’ around some people, their eyes immediately glaze over. It’s really not that complicated; a repeatable thing is basically an algorithm. A recipe when in use is an algorithm. Instructions from Ikea are algorithms. Both hopefully give you what you want, and if they don’t, they are ‘buggy’.
Let’s go with the legal definition of what an algorithm is1. Laws don’t work without definitions, and code doesn’t either.
“An algorithm is a set of rules or a computational procedure that is typically used to solve a specific problem. In the case of Vidillion, Inc. v. Pixalate Inc. an algorithm is defined as “one or more process(es), set of rules, or methodology (including without limitation data points collected and used in connection with any such process, set of rules, or methodology) to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations, including those that transform an input into an output, especially by computer.” With the increasing automation of services, more and more decisions are being made by algorithms. Some examples are; criminal risk assessments, predictive policing, and facial recognition technology.”
By this definition, and perhaps in its simplest form, adding two numbers is an algorithm, which also fits just about any technical definition out there. That’s not at issue.
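To show just how low that bar is, here’s a two-line sketch that already satisfies the definition quoted above:

```python
# By the quoted definition, this is already an algorithm: a rule
# that transforms an input into an output, especially by computer.
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))  # 5 -- and if it didn't print 5, it would be 'buggy'
```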
What is at issue in the context of social networks is how algorithms impact what we view on a social networking website. We should all understand in the broad strokes that Facebook, Twitter, TikTok and their ilk are in the business of showing people what they want to see, and to do this they analyze what people view so that they can give people what they want.
Ice cream and brownies for breakfast, everyone!
Let’s agree every individual bit of content you see that you can act on, such as liking or re-transmitting, is a single item. Facebook sees you like ice cream, Facebook shows you posts of ice cream incessantly. Maybe you go out and eat ice cream all the time because of this and end up with obesity and diabetes. Would Facebook be guilty of making you obese and diabetic?
Fast food restaurants aren’t considered responsible for making people obese and diabetic. We have choices about where we eat, just as we have choices about what we do with our lives outside of a social network context. Further, almost all of these social networks give you options to not view content, from blocking to reporting to randomly deleting your posts and waving a finger at you for being naughty – without telling you how.
Timelines: It’s All A Story.
As I wrote elsewhere, we all choose our own social media adventures. Most of my social networks are pretty well tuned to feed me new things to learn every day, while doing a terrible job of providing me information on what all my connections are up to. It’s a real estate problem on social network sites, and not everyone can be in that timeline. Algorithms pick and choose, and if there are paid advertisements to give you free access, they need space too.
Think of it all as a personal newspaper. Everything you see is picked for you based on what the algorithms decide, and yet all of that information is competing to get into your eyeballs, maybe even your brain. Every story is shouting ‘pick me! pick me!’ with catchy titles, wonderful images, and maybe even some content – because everyone wants you to click to their website so you can hammer them with advertising.4
Yet when we step back from those individual stories, the social networking site is curating everything into a single ordered feed. Let’s assume that what it thinks you like to see the most is at the top, and that it goes down in priority based on what the algorithms have learned about you.
Now think of each post as a page in a newspaper. What’s on the front page affects how you perceive everything in the newspaper. Unfortunately, because it’s all shoved into a prioritized list for you, you get things that are sometimes in a strange order, giving a weird context.
Sometimes you get stray things you’re not interested in because the algorithms have grouped you with others. Sometimes the priority of what you last wrote about will suddenly have posts related to it covering every page in that newspaper.
You might think you’re picking your own adventure through social media, but you’re not directly controlling it. You’re randomly hitting a black box to see what comes out in the hope that you might like it, and you might like the order that it comes in.
We’re all beta testers of social networks in that regard. They are constantly tweaking their algorithms to try to do better, but ‘doing better’ isn’t necessarily for you. It’s for them, and more than likely also for training their artificial intelligences. It’s about as random as human interests are.
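As a toy illustration of that black box – my own sketch with a made-up engagement history, and in no way any platform’s actual ranking code – a feed ‘algorithm’ can be as simple as sorting by topics you’ve engaged with before:

```python
# A toy feed ranker: sort posts by how often the user engaged with
# each topic before. Illustrative only -- no platform publishes its
# real ranking code, and real systems use far more signals.
from collections import Counter

def rank_feed(posts, engagement_history):
    """posts: list of (topic, text) items; engagement_history: topics
    the user previously liked or boosted. Most 'relevant' first."""
    interest = Counter(engagement_history)
    return sorted(posts, key=lambda post: interest[post[0]], reverse=True)

history = ["ice cream", "ice cream", "ice cream", "tractors"]
feed = [("politics", "..."), ("ice cream", "new flavor!"), ("tractors", "...")]
for topic, text in rank_feed(feed, history):
    print(topic, "->", text)
# The ice cream fan sees ice cream at the top of the page -- incessantly.
```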
Developing Algorithms.
Having written software in various companies over the decades, I can tell you that if there’s a conscious choice to express something through algorithms – to get people to think one way or the other (the point of ‘free speech’) – it would have to be very coordinated.
Certain content would have to be weighted as is done with advertising. Random content churning through feeds would not fire things off with the social networking algorithms unless they manually chose to do so across users. That requires a lot of coordination, lots of meetings, and lots of testing.
It can be done. With advertising as an example, it has been done overtly. Another example is the latest push against fake news, which has attempted to proactively check content with independent fact checkers.
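Continuing the toy ranker from the previous section – again purely illustrative, with made-up names and weights – that kind of deliberate coordination would look like a platform-chosen multiplier layered over organic interest:

```python
# Purely illustrative: a platform-chosen weight layered over organic
# interest. The overt cases are paid placement (weight > 1) and
# fact-check demotion (weight < 1). All names and numbers are made up.
from collections import Counter

def rank_weighted(posts, history, platform_weights):
    """platform_weights: {topic: multiplier} chosen by the platform."""
    interest = Counter(history)
    def score(post):
        topic = post[0]
        return (1 + interest[topic]) * platform_weights.get(topic, 1.0)
    return sorted(posts, key=score, reverse=True)

feed = [("ice cream", "organic post"), ("gadget", "paid campaign")]
print(rank_weighted(feed, ["ice cream"] * 3, {"gadget": 10.0}))
# The paid topic outranks what the user actually engages with -- but
# choosing those weights across millions of users takes exactly the
# kind of coordination described above.
```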
Is that free speech? Is that freedom of expression of a company? If you look at this case again, you will likely draw your own conclusions. Legally, I have no opinion because I’m not a lawyer.
But as a software engineer, I look at it and wonder if this is a waste of the Court’s time.
It should be of interest to software engineers and others to understand the legal aspects of what we have worked on and will work on. Ethics are a thing. ↩︎
It still is, and I apologize if it’s messy. This is a post I’ll likely have to revisit and edit. ↩︎
Legal definitions of what an algorithm is might vary around the world. It might be worth searching for a legal definition where you are. ↩︎
This site has advertising. It doesn’t really pay and I’m not going to shanghai viewers by misrepresenting what I write. It’s a choice. Yet to get paid for content, that’s what many websites do. If you are here, you’re appreciated. Thanks!↩︎
It sort of already has, as even he points out in his article.
That, you see, is the trouble. We don’t know the training models for these artificial intelligences, we don’t know what biases are inherent in them, and we’re at the mercy of whoever is responsible for these artificial intelligences. We’re hoping that they’re thoughtful and considerate, and not more concerned with money than people.
That really hasn’t worked out so well for us in the past. Yet the present is here in all its glory, unrepentant. It’s happening more obviously now with the news, since next year we get artificial news anchors. It’s being used to fight misinformation on social media platforms like Facebook without even explaining to Facebook users why posts are removed and what they contained that was worth removing them for. It’s here, and it has been here for a while.
Pandora’s box has been opened, and the world will never quite be the same again. Archimedes once talked about having a lever long enough.
Nowadays it’s just a matter of a choice of fulcrum.
Democracy, based on the idea that informed people can make informed choices in their own interest and the common good, could easily become misDemocracy, where the misinformed make misinformed choices that they think are in their own interests and the common good.
It’s likely that at some point we’ve all spread some misinformation involuntarily, and it can have dire consequences. The Washington Post has an article on misinformation, but they forgot the most important thing, I think.
Waiting.
‘Trusted sources’ have been a problem I’ve been studying since we were working on the Alert Retrieval Cache. In an actual emergency, knowing which information you can trust from the ground and elsewhere is paramount. I remember Andy Carvin asking me how Twitter could be used for the same thing, and I shook my head, explaining the problem that no one seemed to want to listen to: an open network presents problems with flawed information getting accepted as truth.
Credentialism is a part of the problem. We expect experts to be all-knowing, when in fact being an expert itself has no certification; it requires having been right before. All the while we want right now, and unfortunately the truth doesn’t work that way.
We see a story on social media and we share it, sometimes without thinking, which is why bad news travels faster than good news.1
The easiest way to avoid spreading misinformation is to do something we’re not very good at in a society that pulses like a tachycardic heart: we wait and see what happens. We pause, and if we must pass something along to our social networks, we say we’re not sure it’s real. Since headlines are usually algorithm-generated to catch eyes and spread like Covid-19, we have to read the stories and check the facts before we share, rather than sharing off the cuff.
Somewhere along the line, the right now trumped being right, and we see it everywhere. By simply following a story before sharing it, you can stop spreading misinformation and stop the virus of misinformation in its tracks. Let the story develop. See where it goes. Don’t jump in immediately to write about it when you don’t actually know much about it.
Check news sources for the stories. Wait for confirmation. If it’s important enough to post, point out that it’s unconfirmed.
News was once trusted more, where the people presenting the news were themselves trusted to give people the facts. There were narratives even then, yet there was a balance because of the integrity of the people involved.
What could possibly go wrong with a news source that is completely powered by artificial intelligence?
Misinformation. Oddly enough, Dr. Daniel Williams wrote an interesting article on misinformation, pointing out that misinformation could be a symptom instead of the actual problem. He makes some good points, though it does seem a chicken-and-egg issue at this point. Which came first? I don’t think anyone can know the answer to that, and if they did, they’d probably not be trusted, because things have gotten that bad.
At the same time, I look through my Facebook memories just about every day and note more and more content that I had shared is… gone. Deleted. There’s no reasoning given, and when I do find out that something I shared has been deleted, it’s as informative as a random nun wandering around with a ruler, rapping people’s knuckles and not telling them why she’s doing it.
Algorithms. I don’t know that it’s censorship, but they sure do weed a lot of content and that makes me wonder how much content gets weeded elsewhere. I’m not particularly terrible with my Facebook account or any other account. Like everyone else, I have shared things that I thought to be true that ended up not being true, but I don’t do that very often because I’m skeptical.
We would like to believe integrity is inherent in journalism, but the water got muddied somewhere along the way, when news narratives and editorials became more viewed than the actual facts. With the facts, it’s easy enough to build one’s own narrative – though not when people are too busy making a living to do so. Further, we have a tendency to view that which fits our own world view – the ‘echo chambers’ that pop up now and then, such as echoed extremism. To expand beyond our echo chambers, we need to find the time to do so and be willing to have our own world views challenged.
Instead, most people are off chasing the red dots, sometimes mistaking being busy for being productive. At a cellular level, we’re all very busy, but that doesn’t mean we’re productive – that we’re adding value to the world around us somehow. There is something to Dr. Daniel Williams’ points on societal malaise.
A news network run completely by artificial intelligence, mixed with the world as we have it now, doesn’t seem ideal. Yet the idea has its selling points, because media itself isn’t trusted – largely because media is built around business, business is built around advertising, and advertising in turn is a game of numbers: to get the numbers, you have to get eyeballs looking at the content. Thus, propping up people’s world views matters more when the costs of doing all of that are higher. Is it possible that decreasing those costs would decrease the need to prop up world views for advertising?
In discussion with another writer over coffee, I found myself explaining that biases in artificial intelligences – particularly large language models – get treated as something recent, when knowledge has been subject to such biases for millennia.
Libraries have long been considered our centers of knowledge. They have existed for millennia, serving as places of stored knowledge and attracting all manner of knowledge to their shelves.
Yet there is a part of the library, even the modern library, that we don’t hear as much about: the power of what is in the collection.
‘Strict examination’ of library volumes was a euphemism for state censorship
Like any good autocrat, Augustus didn’t refrain from violent intimidation, and when it came to ensuring that the contents of his libraries aligned with imperial opinion, he need not have looked beyond his own playbook for inspiration. When the works of the orator/historian Titus Labienus and the rhetor Cassius Severus provoked his contempt, they were condemned to the eternal misfortune of damnatio memoriae, and their books were burned by order of the state. Not even potential sacrilege could thwart Augustus’ ire when he ‘committed to the flames’ more than 2,000 Greek and Latin prophetic volumes, preserving only the Sibylline oracles, though even those were subject to ‘strict examination’ before they could be placed within the Temple of Apollo. And he limited and suppressed publication of senatorial proceedings in the acta diurna, set up by Julius Caesar in public spaces throughout the city as a sort of ‘daily report’; though of course, it was prudent to maintain the acta themselves as an excellent means of propaganda.
Of course, the ‘editing’ of a library is a difficult task, with ‘fake news’ and other things potentially propagating through human knowledge. We say that history is written by the victors, and to a great extent this is true. Spend longer than an hour on the Internet and you may well find something that should be condemned to flame, or at least you’ll think so. I may even agree. The control of information has historically been central, and nothing has changed in this regard. Those who control the information control how people perceive the world we live in.
There’s a fine line between censorship and keeping bad information out of a knowledge base, and what is ‘bad’ is subjective. The flat earth ‘theory’, which has gained prominence in recent years, is simply not possible to defend if one looks at the facts in their entirety. The very idea that the moon could appear as it does on a flat earth would have us re-examine a lot of science. It doesn’t make sense, so where is the harm in letting people read about it? There isn’t, really; it’s simply a reflection of how we have reached such heights of literacy and such lows of critical thought.
The answer at one time was the printing press, where ideas could spread more quickly than through the manual labor, as loving as it might have been, of copying books. Then came radio, then television, then the Internet – all of which have suffered the same issues and even created new ones.
What gets shared? What doesn’t? Who decides?
This is the world we have created artificial intelligences in, and these biases feed the biases of large language models. Who decides what goes into their training models? Who decides what stays out?
Slowly and quietly, the memory of damnatio memoriae glows like a hot ember – the ever-present problem with any form of knowledge collection.