Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, such as CNBC's "A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn".

Different versions of that article written by different organizations are all over right now, but it derives from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI, we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I'd offer that this indirectness can make it more dangerous.

In the context of what I've been writing about, we're looking at what we feed our heads with. We're looking at social media being gamed to cause confusion. These are dangerous things. "Garbage in, garbage out" doesn't just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan for how to employ those displaced. Do people want jobs? I think that's the wrong question, a leftover from an older layer of paint on society's canvas. The more appropriate question is, "How will people survive?", and that's a question we overlook because of the assumption that if people want to survive, they will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not; corporations like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with "excerpts" from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don't exist.

"Lawyer cited 6 fake cases made up by ChatGPT; judge calls it 'unprecedented'", Jon Brodkin, Ars Technica, May 30th, 2023.

It's a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction created by those who control the large language models – and the general artificial intelligences that will come later.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone's on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and serve a different set of propaganda, telling her to stay home and stop playing with keyboards.

It's not hard to imagine all of this. It is a big deal, but in parts of the world like Trinidad and Tobago, you don't see much about it because there's no real artificial intelligence here, even as local newspaper headlines indicate real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around to discuss it. Odd how that didn't come up in a Google search of "AI Trinidad and Tobago".

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn't as tangible to people as nuclear war or pandemics. It's how it will change our way(s) of life, how we'll provide for families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots that the media doesn't show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world – but it gets lost in the shuffle of social media algorithms if it's not something that is organically popular.

This is going to change societies around the globe. It's going to change global society, where access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We've propped our civilizations up with all manner of things that have not withstood previous changes in technology, and this is a definite leap beyond that.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don't know that we have any advice to give. That means they won't be prepared.

But then, neither were we, really.