Damnatio Memoriae

In discussion with another writer over coffee, I found myself explaining biases in artificial intelligences – particularly large language models – as something recent. In truth, knowledge has been subject to such biases for millennia.

Libraries have long been considered our centers of knowledge. They have existed for millennia, serving as stores of knowledge for as long and attracting all manner of it to their shelves.

Yet there is a part of the library, even the modern library, which we don’t hear as much about: the power of deciding what is in the collection.

‘Strict examination’ of library volumes was a euphemism for state censorship

Like any good autocrat, Augustus didn’t refrain from violent intimidation, and when it came to ensuring that the contents of his libraries aligned with imperial opinion, he need not have looked beyond his own playbook for inspiration. When the works of the orator/historian Titus Labienus and the rhetor Cassius Severus provoked his contempt, they were condemned to the eternal misfortune of damnatio memoriae, and their books were burned by order of the state. Not even potential sacrilege could thwart Augustus’ ire when he ‘committed to the flames’ more than 2,000 Greek and Latin prophetic volumes, preserving only the Sibylline oracles, though even those were subject to ‘strict examination’ before they could be placed within the Temple of Apollo. And he limited and suppressed publication of senatorial proceedings in the acta diurna, set up by Julius Caesar in public spaces throughout the city as a sort of ‘daily report’; though of course, it was prudent to maintain the acta themselves as an excellent means of propaganda.

“The Great Libraries of Rome”, Fabio Fernandes, Aeon.com, 4 August 2023

Of course, the ‘editing’ of a library is a difficult task, with ‘fake news’ and other distortions potentially propagating through human knowledge. We say that history is written by the victors, and to a great extent this is true. Spend longer than an hour on the Internet and you may well find something that should be condemned to flame, or at least you’ll think so. I may even agree. The control of information has always been central to power, and nothing has changed in this regard. Those who control the information control how people perceive the world we live in.

There’s a fine line between censorship and keeping bad information out of a knowledge base, and what is ‘bad’ is subjective. The flat earth ‘theory’, which has gained prominence in recent years, is simply not possible to defend if one looks at the facts in their entirety. The very idea that the moon could appear as it does over a flat earth would have us re-examine a lot of science. It doesn’t make sense, so where is the harm in letting people read about it? There isn’t, really; its persistence is simply a reflection of how we have reached such heights of literacy and such lows of critical thought.

The answer at one time was the printing press, where ideas could spread more quickly than through the manual labor, as loving as it might have been, of copying books. Then came radio, then television, then the Internet – all of which have suffered the same issues and even created new ones.

What gets shared? What doesn’t? Who decides?

This is the world in which we have created artificial intelligences, and these biases feed the biases of large language models. Who decides what goes into their training data? Who decides what doesn’t?

Slowly and quietly, the memory of damnatio memoriae glows like a hot ember – the ever-present problem with any form of knowledge collection.

Political And AI Intrigue In Social Media.

I normally don’t follow politics because politics doesn’t really follow me – it tends to stalk me instead. Yet today, with social media in the headlines, I paid attention – because it’s not just politics involved. There’s artificial intelligence as well, or what is accused of being it.

From the first article:

A US federal judge has limited the Biden administration’s communications with social media companies which are aimed at moderating their content.

In a 155-page ruling on Tuesday, judge Terry Doughty barred White House officials and some government agencies from contacting firms over “content containing protected free speech”.

It is a victory for Republicans who have accused officials of censorship.

Democrats said the platforms have failed to address misinformation.

The case was one of the most closely-watched First Amendment battles in the US courts, sparking a debate over the government’s role in moderating content which it deemed to be false or harmful…

“Biden officials must limit contact with social media firms”, Annabelle Liang, BBC News, 5 July 2023

By itself, it’s pretty damning for the Democrats, who, like the Republicans, aren’t my favorite people in the world. It isn’t an either/or proposition, but it’s usually simplified to one so that both sides keep reading for the advertising.

Now here’s the second article.

Evidence of potential human rights abuses may be lost after being deleted by tech companies, the BBC has found.

Platforms remove graphic videos, often using artificial intelligence – but footage that may help prosecutions can be taken down without being archived.

Meta and YouTube say they aim to balance their duties to bear witness and protect users from harmful content.

But Alan Rusbridger, who sits on Meta’s Oversight Board, says the industry has been “overcautious” in its moderation…

“AI: War crimes evidence erased by social media platforms”, Jack Goodman and Maria Korenyuk, BBC Global Disinformation Team, 1 June 2023

The artificial intelligence angle is from a month ago. The political angle dealing with Democrats and Republicans (oh my!) is today, because of the Federal Judge’s ruling. Both deal with content being removed on social media.

The algorithms on social media removing content related to Ukraine are not something new when it comes to Meta, because yours truly spent time in Facebook jail for posting an obvious parody of a Ukrainian tractor pulling the Moskva – before it was sunk. Facebook labeled it as false information, which of course it was – it was a parody, and any gullible idiot who thought a Ukrainian tractor was pulling the Moskva deserves to be made fun of.

Clearly, the Moskva would need two Ukrainian tractors to pull it. See? Again, comedic.

These stories are connected in that the whole idea of ‘fake news’ and ‘trusted information’ has been politicized just about everywhere, and by politicized I also mean polarized. Even in Trinidad and Tobago, politicians use the phrases as if they are magical things one can pull out of… an orifice.

Algorithms – or what gets blamed on AI – inject their own bias by removing some content and leaving other content. Is some of this related to the ruling about Biden officials? I imagine it is; how much is debatable. Yet during Covid, people spread a lot of fake news that worked against the public interest in health.

The political angle had a Federal Court intervene. No such thing has happened with the artificial intelligence angle. That’s disturbing.

It looks like getting beyond Code 2.0 is becoming more important – or more overdue. What you see in the echo chambers of social media are just red dots, shining on the things others want us to see, and not necessarily the right things.

Beyond The Moat.

In the world we like to talk about – since it reflects ourselves – technology weaves dendritically through our lives. Much of it is invisible to us because it is taken for granted.

The wires overhead spark with Nikola Tesla’s brilliance; water flows in pipes, a technology dating back to 3000–4000 BCE in the Indus Valley; gas propagates for cooking and heat; and we spend far too much time in automobiles.

Now even internet access is taken for granted by many, as social media platforms vie for timeshares of our lives, elbowing out more and more by giving people what they want. Except Twitter, of course – but for the most part social media is the new Hotel California: you can check out any time you like, but you may never leave as long as the people you interacted with are there.

This is why, when I read Panic about overhyped AI risk could lead to the wrong kind of regulation, I wondered about what wasn’t written. It’s a very good article that underlines the necessity of asking the right questions to deal with regulation – and attempts to undercut some of the hype against the technology. Written by a machine learning expert, Divyansh Kaushik, with Matt Korda, it reads really well on what I agree could be a bit too much backlash against artificial intelligence technologies.

Yet their jobs are safe. In Artificial Extinction, I addressed much the same thing, not as an expert but as a layperson who sees the sparking wires, the flowing water, the cars stuck in traffic, and so on. It is not far-fetched to see that the impacts of artificial intelligence go beyond the scope of what experts on artificial intelligence think. What they omit in the article is what should be more prominent.

I’m not sure we’re asking the right questions.

The economics of jobs gets called into question as people who spent their lives doing something find that it can be replaced. This in turn affects a nation’s economy, which in turn affects the global economy. China wants to be a world leader in artificial intelligence by 2030, but given its population and history of human rights, one has to wonder what it will do with all those suddenly extra people.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and serve her a different set of propaganda, telling her to stay home and stop playing with keyboards.

Artificial Extinction, KnowProSE.com, 31 May 2023.

These concerns are not new, but they are made more plausible by artificial intelligence, because whoever controls these systems controls much more than social media platforms. We have really no idea what the models are being trained on or where that data came from, and let’s face it – we’re not that great with who owns whose data. Henrietta Lacks immediately comes to mind.

My mother wrote a poem about me when I joined the Naval Nuclear Propulsion program, annoyingly pointing out that I had stored my socks in my toy box as a child and contrasting it with my thought at the time that science and technology can be used for good. She took great joy in reading it to audiences when I was present, and she wasn’t wrong to do so even as annoying as I found it.

To retain a semblance of balance between humanity and technology, we need to look at our own faults. We have not been so great about that, and we should evolve our humanity to keep pace with our technology. Those in charge of technology, be it social media or artificial intelligence, are far removed from the lives of the people who use their products and services, despite making money from those very same lives. It is not an insult; it is a matter of perception.

Sundar Pichai, CEO of Google, seemed cavalier about how artificial intelligence will impact the livelihoods of some. While we all stared at what was happening with the Titan – or wasn’t – the majority of people I knew were openly discussing what sort of people would spend US$250K to go to a very dark place to look at a broken ship. Extreme tourism, they call it, and it’s within the financial bracket of those who control technologies now. The people who go on such trips to space, or underwater, are privileged, and in that privilege have no perspective on how the rest of the world gets by.

That’s the danger – but it’s not a danger to them, and because they seem cavalier about the danger, it is a danger. These aren’t elected officials who are controlled through democracy, as strange a ride as that is.

These are just people who sell stuff everybody buys, and who influence those who think themselves temporarily inconvenienced billionaires to support their endeavors.

It’s not good. It’s not really bad either. Yet we should be aspiring toward ‘better’.

Speaking for myself, I love the idea of artificial intelligence, but that love is not blind. There are serious impacts, and I agree that they aren’t the same as nuclear arms. Where nuclear arms can end societies quickly, how we use technology – and how many are ignorant of it – can cause something I consider worse: a slow and painful end of societies as we know them, when we don’t seem to have any plans for the new society.

I’d feel a lot better about what experts in silos have to say if they… weren’t in silos, or in castles with moats protecting them from the impacts of what they are talking about. This is pretty big. Blue-collar workers are under threat from smarter robots, white-collar workers are under threat, and even creatives are wondering what comes next as they are no longer as needed for images, video, and so on.

It is reasonable for a conversation discussing these things to happen – yet such conversations almost always happen only after the fact.

We should be aspiring to do better than that. It’s not the way the world works now, and maybe it’s time we changed that. We likely won’t, but with every new technology, we should have a few people pointing that out in the hope that someone might listen.

We need leaders who understand what lies beyond the moat, and if they don’t, we should stop considering them leaders. That’s why the United States threw a tea party in Boston, and that’s why the United States is celebrating Independence Day today.

Happy Independence Day!

Twitter’s Just Another Thing To Route Around.

There are people who like Elon Musk to the point of depravity, and there’s not much to do about that. I don’t bother writing about him because, generally speaking, he doesn’t cross over into my world very often. None of the companies he has been involved in – from PayPal to Tesla and now Twitter – has really added value for me.

When he took over Twitter – a platform I generally use only to track live events from sources I trust – I wasn’t worried. Most of these sources aren’t ‘verified Twitter’ folks, but people who have been consistently on the money over the years.

The cost of the new Twitter API is something I covered before in the context of WordPress.com, and now the story has finally made it to Mashable in a broader context. It seems a bit late, actually – I don’t know why it took so long for the story to come out, but come out it did.

$5,000 a month is definitely not a figure for developers, considering the level of transactionality developers are used to. If I were asked to spend that, I’d expect steak dinners every night with a cardiologist on the payroll. Twitter, once the Wild West, is being gentrified – and that is not a kind use of the word.

Still, it’s something people are routing around, because when things become tough to work with on the Internet, we find ways around it. Since I’m not as vested in Twitter usage, it’s not a big deal for me. Every now and then I tweet something related to what I’m writing, or comment on something that I’m keeping an eye on.

Yet the way it is being handled is… poor. Some folks are finding out about things the hard way. This (borrowed from Mashable’s work, so props to them) is a pretty bad way to find something out.

It’s not often a social media company becomes outright hostile to its users – the ones who did find value in Twitter. People are moving to Telegram and other platforms.

Personally, I think Twitter was on a decent path until Musk decided to be the Dictator-of-Twits, though I had misgivings about the trolling, among other things – and I think trusted sources should mean something other than what was happening then and what is happening now.

However you feel about it, it’s a matter of what works for you. Yet a lot of popular content won’t be on Twitter anymore, and that creates new problems for keeping track of whose content you like. I can’t even make a suggestion on it, because some go here, some go there…

For the record, I don’t currently like any of the social media platforms for this, largely because of an account bias: accounts can become popular even though their content isn’t necessarily the most worthwhile.
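The account bias can be sketched in a few lines – the posts, follower counts, and quality scores below are entirely invented for illustration, and real ranking systems are far more elaborate:

```python
# Toy illustration of account bias: ranking a feed by follower count
# rather than by some measure of content quality. All values are made up.

posts = [
    {"author_followers": 2_000_000, "quality": 0.2, "text": "hot take"},
    {"author_followers": 300, "quality": 0.9, "text": "careful analysis"},
]

by_popularity = sorted(posts, key=lambda p: p["author_followers"], reverse=True)
by_quality = sorted(posts, key=lambda p: p["quality"], reverse=True)

# The two orderings disagree: the popular account tops the feed,
# while the worthwhile content sits below the fold.
assert by_popularity[0]["text"] == "hot take"
assert by_quality[0]["text"] == "careful analysis"
```

The point of the sketch is that the ranking key, not the content, decides what gets seen.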

Internet Detritus.

Back in 1996, I was driving to work in the Clearwater, Florida area and saw a billboard for Brainbuzz.com, now viewable only through the Wayback Machine. I joined, and I ended up writing for them. They’re not around anymore.

They became CramSession.com, where I continued writing for them. I had roughly 100 articles about software engineering and C++ which are just… gone. Granted, that was over two decades ago, but it’s peculiar to have outlived all these companies that thrived during the Dot Com Bubble – which should be taught in high school now as part of world history. It isn’t, of course, but it should be.

Consciously, we distill good things and keep moving them forward, but sometimes, because of copyright laws, things get orphaned in companies that closed their digital doors. Generations on, it’s hard to convey this lack of permanence, because the capacity for things to last ‘forevah’ seems built into some social media – yet content gets hidden away by algorithms, which is effectively the same as being gone.

Sometimes bubbles of information get trapped in the walls of an imploded company. It could happen even to the present 800-lb gorillas of the Internet. The future is the one thing nobody will tell you about in their end-of-year posts: it’s unpredictable. The world changes more and more rapidly, and we forget how much gets left behind.

“When the Lilliputians first saw Gulliver’s watch, that “wonderful kind of engine…a globe, half silver and half of some transparent metal,” they identified it immediately as the god he worshiped. After all, “he seldom did anything without consulting it: he called it his oracle, and said it pointed out the time for every action in his life.” To Jonathan Swift in 1726 that was worth a bit of satire. Modernity was under way. We’re all Gullivers now. Or are we Yahoos?”

Faster: The Acceleration of Just About Everything, James Gleick, 2000.

What’s really funny about that quote is that Yahoo.com was more of a player in the search engine space back then. In fact, in 1998, Yahoo was the most popular search engine, and that it’s still around is actually a little impressive given all that happened after the DotCom Bubble popped. So the quote itself hasn’t aged that well, which demonstrates the point I am making.

Nothing really lasts on the Internet, and even with the WayBack Machine (thank you, Internet Archive!), much of what was is simply no longer – subject to which companies owned the copyrights to the information, or a simple matter of what has been kept around through what boils down to popularity.

And what’s popular isn’t always good. I submit to you any elected official you dislike to demonstrate that popularity is subjective – and on the Internet, popularity is largely about marketing and money spent toward that end. The Internet, as it stands, is the house that we built based on what made money.

That’s not particularly attractive.

In the end, it all sort of falls away. And coming generations will see it as well; some may have already begun seeing it.

Who decides what stays on the Internet? Why, we do of course, one click at a time.

Now imagine this fed into an artificial intelligence’s deep learning model. The machine learning would be taught only what has survived, not what has failed – and this could be seen as progress. I think largely it is, despite myself – but what important stuff do we leave behind?

We don’t know, because it ain’t there.
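The survivorship problem can be made concrete with a hypothetical corpus builder that keeps only content still online – the records below are invented, not a real pipeline:

```python
# Hypothetical sketch: assembling a training corpus only from content
# that survived. Everything that vanished simply cannot influence the model.

records = [
    {"text": "idea that stayed popular", "still_online": True},
    {"text": "orphaned C++ article from a dot-com casualty", "still_online": False},
    {"text": "another surviving page", "still_online": True},
]

# The distillation step: only surviving pages make it in.
corpus = [r["text"] for r in records if r["still_online"]]

assert len(corpus) == 2
assert all("orphaned" not in text for text in corpus)
```

Nothing in the resulting corpus even records that something was dropped – the gap is invisible to whatever trains on it.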

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell and sight to go by. The rest comes from what we glean through those senses, be it from other people, technology, language, culture, etc.

If the bubble is too small, we feel it to be a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have passed down through the generations, shaped by how others have helped us translate our world and all that is in it. We’re part of a greater distillation process, where, because of our own limitations, we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of ‘be nice to each other’.

If we view all of this as traffic moving through time, bubbles race toward the future together, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward, one generation at a time. Until broadcast media, this was generally a very local process. Hence the red dots trying to get us to do things, wielded by those who wish us to do things – from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made the process global, at first by giving people information and then by broadcasting opinions sustained through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joes!”? I doubt that.

However, those with fiscal interests can decide what the deep learning models of artificial intelligences are exposed to. Machine learning is largely about clever algorithms and pruning the data those algorithms are trained on, and those doing the pruning are certainly not the most unbiased of humanity. I wouldn’t say they are the most biased either – we’re all biased by our bubbles.
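A minimal sketch of that pruning step – the blocklist and documents below are invented, and real pipelines use far more elaborate filtering, but the structural point is the same:

```python
# Whoever writes the blocklist decides what the model can learn from.
# A one-function sketch of bias injected at the data-pruning stage.

def prune(documents, blocklist):
    """Drop any document containing a blocked term before training."""
    return [
        doc for doc in documents
        if not any(term in doc.lower() for term in blocklist)
    ]

documents = [
    "history as told by the victors",
    "a dissenting account of the same events",
    "an unrelated recipe for hamburgers",
]

training_set = prune(documents, blocklist={"dissent"})
assert len(training_set) == 2  # the dissenting account never reaches the model
```

The filter itself is neutral machinery; the bias lives entirely in who chooses the blocklist.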

It’s Pandora’s Box. How do we decide what should go in and what should stay out? We can’t, really – nobody is actually telling us what’s in them now. Our education systems, too, show us that this is not something we’re good at.

Bias in AI, Social Media, and Beyond.

One of the things that is hard to convey to many people is how bias actually affects things. So I’ll offer a unique perspective, one that involves hamburgers.

All good stories should have a good burger of some sort, whatever your meat or lack of meat allows for. Some people will see ‘burger’ and go for the default of beef in their head; some will think chicken or turkey or lamb or mushroom or… that, right there, is a bias.

I’ll go a bit further.

My father, well into his 50s, felt like having a hamburger and I asked him why we didn’t just make them instead of going out and buying some crappy burgers. He admitted something that floored me.

He didn’t know how to make them. Here he was, having lived decades eating burgers, but he never learned how to make burger patties. My father. The guy who always seemed within 10 feet of a burger joint when it came to feeding times.

Now, why was that?

First, he grew up in a Hindu home, and beef was not on the menu. In that household he never would have been exposed to how to make a beef patty – or a beef anything, for that matter. So he had an implicit bias from the start: not knowing how to make a hamburger.

He did, according to his oral history, like eating hamburgers, and would go to a place near his school to eat some. His eyes would glow when he discussed that memory, as simple as it might be.

Now, he also got married in the 1970s in the U.S., and Mom handled all the cooking. We cooked burgers at home, but he managed not to learn about making the patties. He worked night shift, so he wasn’t around most of the day anyway. More bias keeping him from learning how to make a hamburger – something an American of his generation generally considers an art form. But he was not American. More bias.

After decades, he assumed that learning how to make them was beyond him – which seemed peculiar considering how much time and care he would put into an omelette.

If my father were an AI of some sort and you asked him how to make a beef patty, he would likely have said, ‘they come in stores.’ While not knowing how to make burger patties is a pretty low threshold compared to human extinction, it’s not hard to see how omitting information can be a handicap and create a bias.
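The burger story maps neatly onto a toy knowledge-base lookup – the entries and fallback answer here are invented to echo the anecdote:

```python
# Bias by omission: an answer shaped by what the knowledge base lacks.
# The entries and the fallback are invented to mirror the story above.

knowledge = {
    "omelette": "whisk eggs, season, cook gently in butter",
    # no entry for "burger patty" – the gap itself shapes the answer
}

def answer(question):
    return knowledge.get(question, "they come in stores")

assert answer("omelette").startswith("whisk")
assert answer("burger patty") == "they come in stores"
```

The system isn’t lying; it’s confidently answering from a world in which patties only come from stores.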

It’s also not hard to see that creating information or perspectives can create bias as well. If we don’t teach an AI about weight loss, it might suggest amputation to someone wondering how to lose weight – and even recommend lightweight prosthetics. Ridiculous, but we never thought kids would be eating Tide Pods. We don’t exactly have as high a threshold as we might like to think.

There are good and bad biases, and they’re largely subjective. We see systemic biases now over all sorts of things – can you imagine them happening faster and more efficiently?

Aside from the large sweeping biases of culture, the artificial construct of race, and the availability of information, what other biases do you think can impact an artificial intelligence? Social media? Beyond?

Gaming The Medium

Even as we paint on society’s canvas, society paints on our individual canvases, and in this modern world of the Internet, social media and games, there’s a lot of paint being thrown around. Our world changes us, we change our world.

It’s not all as pretty as the staged videos in Instagram, TikTok and Facebook reels, where ‘influencers’ do their best to find attractive red dots for people to chase. It’s in their interest. Before the Internet it was broadcast media, but now, with social media, there is an increasingly large illusion of being able to interact – when we might just be interacting with some algorithms attached to a dictionary.

Algorithms, though, carry dangers.

Via CuriosityGuide.

The video outlines some of what has been happening that isn’t good.

Algorithms, though, are important and can be used for good. We don’t see that as much as we should, largely because the wide swath of algorithms seems, at the least, questionable as to whether it is good. That questionability comes from what we each want to see from the world and what cost we are willing to pay for it – or, in the case of Internet trolls, the cost we wish others to pay for the world they want to see. I have more to write about trolls, but not yet.

What do we want? Before we figure out who we are, we seem to be told who we need to be. We mimic behaviors as children, and we grow within the framework supplied by our environments – rewards and punishments are set. We begin playing the game, in an environment or, subjectively, an anti-environment.

Life In the Anti-Environment: Learning How To Play is an interesting paper by Adam Pugen in this regard – you can find the PDF of the paper here. It’s focused on video games, yet much of what is in there could apply to social media, since the world is increasingly contrived and served through flat screens. This contrivance has been noted and mocked by more than one person; this German artist, mocking Instagram photos, is a wonderful example.

In any game, there are things that are possible and things that are less possible. One of the more common real-world games, the lottery, sells us on the fact that there is a possibility of winning despite an extremely low probability. The lottery has the distinction of being forced to be honest about the odds, but I have yet to see that honesty in the advertising for a lottery. What do you spend, and what do you get? Most people see spending a few dollars every week over the course of their lifetime as a worthwhile risk – otherwise there would be no lottery.

The game environment is simply defined. Enter the world of multiplayer games, which connect people through the internet and allow them to interact within certain guidelines. People, of course, find the loopholes, and some enjoy the anonymous trolling, since they are faceless names and avatars. Others plod through the game, others pay to get ahead, all depending on the game and how it is set up. If that isn’t a metaphor for modern social media, I don’t know what is.

All around the world, people are playing the social media game. How one ‘wins’ depends on how one views success, but since social media is attached to real life more closely than other games, there is a financial aspect that is quite real for the majority of the planet. How one loses, implicitly, is by not winning.

Now that we have large language models and the promises of artificial intelligence making things so much better, the game is more complicated.

If money is how we measure success, there are billions of people losing. We could change how we measure success, or we could change the odds. Right now, the odds seem to be going the wrong way. There has to be some middle ground between tossing out participation trophies and a few winners taking all.

Ideas?