Cory Doctorow has said that AI is a bubble, which in some ways makes sense. After all, what is being marketed as artificial intelligence is pretty much a matter of statistics trained to give users what they want based on what they have wanted previously, collectively.
That, at least to me, isn’t really artificial intelligence as much as it’s math as smoke and mirrors giving the illusion of intelligence. That’s an opinion, of course, but when something you expect to give you what you want always gives you what you want, I’m not sure there is intelligence involved. It sounds more like subservience.
In fact, as a global society, we should probably be asking more about what we expect from artificial intelligence rather than letting a handful of people dictate what comes next. Unfortunately, that's not the way things seem to work in our global society.
The reality, as Joe McKendrick pointed out, is that AI as marketed now will simply fade into the background, becoming invisible, which it already has. New problems arise from that, particularly around accountability.
Cory Doctorow is pretty much on the money despite being mocked in some places. The bubble is a marketing bubble around what has been sold as artificial intelligence. What we have at this point are useful tools that can make some jobs obsolete, which says more about the jobs than anything else. If, for example, you think that a large language model can replace a human's ability to communicate with other humans, you could be right to an extent – but virtual is not actual.
What will be left next year of all that has been marketed? The stuff behind the scenes, fading into the background, which is almost never profitable in itself.
Yet where Cory Doctorow is a bit wrong is that imaginations have now been harnessed toward artificial intelligence, and maybe we will actually produce an intelligence that is a real one. Maybe, like little spores, they will help us expand our knowledge beyond ourselves if we fit them with sensors so that they can experience the world themselves rather than being fed regurgitated human knowledge.
After all, we create humans much more cheaply than we do artificial intelligences.
I think that might be a better thing to achieve, but… that’s just an opinion.
Thank you! This is the best description I've read for what is passing as AI. I'm not fluent in the tech myself, but I've played with some of it, and each time I have been unimpressed – nothing original. I'm going to keep your description handy to refer to it.
I’m glad you like it. I’ve written about it from a more human perspective on RealityFragments.com ( such as https://realityfragments.com/2023/05/20/subjective-ai-results/ )
What they have basically done is taken algorithms like those that choose what you might be interested in on Netflix, Amazon, etc, and given it a thesaurus. That seems a bit understated, but in concept it’s true.
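The recommendation analogy can be sketched in a few lines of Python. This is purely an illustrative toy of my own (the corpus and function names are invented, and real large language models use neural networks at vastly larger scale): a next-word "recommender" that suggests whatever word most often followed the current one in its training text, much as a streaming service suggests what similar viewers watched before.

```python
from collections import Counter, defaultdict

# A tiny, made-up training text standing in for the web-scale data
# real models are trained on.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Recommend the most common follower of `word`, like a
    recommender engine suggesting the most popular next item."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("sat"))  # "on" always followed "sat" here
```

The point of the toy is the shape of the idea, not the mechanics: the output is whatever the collective statistics of past text make most likely, which is why it always gives back something familiar.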
What we don't see, and why it looks like such magic to us, is the amount of hardware needed to run it.
Consider how much water is needed to cool one of these centers: https://english.elpais.com/technology/2023-11-15/artificial-intelligence-guzzles-billions-of-liters-of-water.html
So to someone on a mobile phone or a computer, this all seems magical, but only because we don't truly understand the scale of computing necessary for it.