The article, “Introducing the AI Mirror Test, which very smart people keep failing”, makes some pretty good points about these large language models that we are lazily calling Artificial Intelligence. One of my favorites is this:
…What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient…
As I pointed out in ‘A Chat With GPT on AI‘, a post on ChatGPT and large language models, I recognized that the output was playing to my cognitive biases. In that regard, I recognized some of myself in what I was getting back, not too different from when I was playing with Eliza in the 1980s, with the only real difference being that the bot has gotten better because it has access to far more information than what the user types in. We were young, we dreamed, but the tech wasn’t ready yet.
Of course it’s a mirror of ourselves in that regard – but the author didn’t take it to the next step. As individuals we should see ourselves in the output, but we should also understand that it’s global society’s mirror as well, with all the relative good and relative bad that comes with it. We have biases in content based on language, on culture, on religion, and on much more. I imagine the Amish don’t care, but they are still part of humanity, and we have a blind spot there, I’m certain; never mind all the history that our society has erased, continues to erase, or has simply ignored.
Personally, I find it a great way to poll the known stores of humanity on what its biases hold, no matter how disturbing the results can be. And yet we’re likely already diluting our own thoughts reflected back at us, as marketers and bloggers (not mutually exclusive) churn out content from large language models that those same models will eventually be trained on. That’s not something I’m comfortable with, and as usual, my problem isn’t so much the technology as society, a rare thing for me to say when so much technology is poorly designed. Am I ‘victim shaming’?
When the victim is the one abusing themselves, can it be victim shaming?
Our own echo chambers are rather shameless.