I ran across a surprisingly well-done article on the AIpocalypse thing, which I have written about before in ‘Artificial Extinction’, and it’s worth perusing.
“…In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”
Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.”
“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.
Now, let me be plain here. When they say an AI is hallucinating, that’s not really true. Saying it’s ‘bullshitting’ would be closer to the truth, but it’s not even really that. What’s actually happening is that a gap in the training data and algorithms is made apparent by the prompt you give it. It’s not hallucinating. They’re effectively anthropomorphizing some algorithms strapped to a thesaurus when they say ‘hallucinating’.
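To make that concrete, here’s a minimal sketch in plain Python, a toy bigram model of my own invention (it has nothing to do with how any vendor actually builds these systems), just to show what ‘a gap in the training data made apparent by the prompt’ looks like in miniature:

```python
# A toy bigram language model. The point: it always produces *something*
# statistically plausible, even when the prompt falls outside anything it
# was trained on. There is no truth check, only pattern continuation.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
)

# Count which word follows which in the training data.
follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def continue_prompt(prompt, length=8):
    """Extend the prompt word by word using only training-data statistics."""
    out = prompt.split()
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            # Gap in the training data: the model has never seen this word.
            # It doesn't stop or say "I don't know" -- it falls back on
            # whatever weak signal it has and keeps generating fluently.
            options = words
        out.append(random.choice(options))
    return " ".join(out)

# "the cat" is in the training data; "the senator" is not.
# Both get an equally confident-looking continuation.
print(continue_prompt("the cat"))
print(continue_prompt("the senator"))
```

Scale that basic idea up by a few billion parameters and you get very fluent text with the same underlying property: the machinery never checks facts, it just keeps the pattern going.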
They’re trying to make you hallucinate, maybe, if we go by possible intentions.
“…Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.
Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.
“If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN…”
“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.
We don’t like to talk about the intentions of the people involved with these artificial intelligences and their machine learning. We don’t know what models are being used for the deep learning, and to cover that gap in trust, a word like ‘hallucinating’ sounds much sexier and dreamier than, “Well, it kinda blew a gasket there. We’ll see how we can patch that right up, but it can keep running while we do.”
I’m not saying that’s what’s happening, but it’s not a perspective that should be dismissed. There’s a lot at stake, after all, with artificial intelligence standing on the shoulders of humans, who are distantly related to kids who eat Tide Pods.
We ain’t perfick, and thus anything we create inherits that.
I think the last line of that CNN article sums it up nicely.
“…Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?””
“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.
That professor just cut to the quick in a way that made me smile. She just straight out said it.
And.
When we talk about biases, and I’ve written about bias a lot lately, we don’t see every bias on our own. In an unrelated article, Barbara Kingsolver, the only two-time winner of the Women’s Prize for Fiction, drew my attention to one that I hadn’t considered in the context of deep learning training data.
“…She is also surprisingly angry. “I understand why rural people are so mad they want to blow up the system,” she says. “That contempt of urban culture for half the country. I feel like I’m an ambassador between these worlds, trying to explain that if you want to have a conversation you don’t start it with the words, ‘You idiot.’”…”
“Barbara Kingsolver: ‘Rural people are so angry they want to blow up the system’”, The Guardian, Lisa Allardice quoting Barbara Kingsolver, June 16th, 2023.
She’s not wrong, and the bias is largely one of omission, on both the rural and urban sides (suburbia has a side too). So how does one deal with that in the training data for a machine learning model?
We’ve only scratched the surface, now haven’t we? Perhaps just scuffed.