AI, Confirmation Bias and Our Own Insanity.

In unsurprising news, if you feed artificial intelligences the output of artificial intelligences, they become a bit insane. I’d covered that before in Synthetic Recursion, and it seemed pretty intuitive even before I wrote it, but scientists at Rice and Stanford have now written a paper: “Self-Consuming Generative Models Go MAD”.

So, we can say that’s been verified.
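If you want to see that recursion in miniature, here’s a toy sketch in Python – not the paper’s actual experiments, just a one-dimensional stand-in of my own devising, where the “model” is a fitted Gaussian and each generation trains only on samples from the previous generation:

import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(1, 11):
    # "Train" the model: estimate the mean and spread from the current data.
    mu, sigma = data.mean(), data.std()
    # "Generate" the next training set entirely from the model's own output,
    # with no fresh real data mixed back in.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

Run it and the spread tends to decay while the mean wanders, because each generation’s sampling error gets baked into the next generation’s training data. As I read the paper, that’s the same dynamic at scale: without enough fresh real data in each generation, quality or diversity progressively degrades.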

What’s even worse is that Taylor Swift, Selena Gomez and Kim Kardashian have apparently been quoted as saying things they never said – organized disinformation that has appeared all over. In vacuuming up copyrighted content, OpenAI’s ChatGPT might well get infected by it. It’s not just the output of artificial intelligences; output from people willfully misleading others can easily make it in.

Fortunately, I verified with ChatGPT 4, and it got it right by… using Bing. I don’t use Bing. Why does ChatGPT 4? For the same reason you can’t have a Coke with your Kentucky Fried Chicken: exclusive partnerships.

While this time it was caught – it started in November 2023 – it demonstrates how inaccuracies can crop up, how biases can be pushed, and how many problems we still have with misinformation even without involving artificial intelligence. Every time we see anything on social media these days we have to fact-check it, and then we immediately get blowback about fact-checking being flawed.

Why? It fits their confirmation biases. Given the way large language models are trained, we can say that we’re getting a lot right, and yet we’re also collectively under the delusion that everything humanity has collected is right. What is true is that what we believe we know just hasn’t been proven wrong yet, with the threshold for that varying from person to person.

With science, there’s a verification process, but science has come under increasing fire because of who pays for the papers to be written and published. Academia has to be funded, and we don’t fund it as much as we should, so others sometimes do, to their own ends. That’s why it’s important to read the papers, but not everyone has the time to do that. There is good science happening, and I’d like to think more of it is good than bad.

With AI tools, I imagine more papers will be written more quickly, which creates a larger problem. Maybe even an exponentially larger problem.

We accept a lot, and since we don’t know what’s in the learning models, we don’t know what has been verified until we find things that haven’t been. This means we need to be skeptical, just as when we use Wikipedia. There are some people who don’t like doing that footwork, because what they see fits their confirmation biases.

Should we be surprised that our tools would have them too based on what we feed them?

It’s almost as if we need to make sure we’re feeding these learning models with things of value. That should come at a cost, because when we write, when we express ourselves in any way, it’s based largely on experience, sometimes hard won.

Meanwhile, artificial intelligence tools are being created to write summaries of books that authors took years to write. Amazon is apparently being flooded with them, and if I see another advertisement on Facebook for microlearning that seems built on this sort of précis, I might throw up through my monitor onto someone else’s keyboard.
