A Tale of Two AIs.

2023 has been the year when artificial intelligence went from science fiction to technological possibility. It's become so ubiquitous that on Christmas Eve, chatting with acquaintances and friends, I found people from all walks of life talking about it.

I found it disappointing, honestly, because it was pretty clear I was talking about one sort of artificial intelligence while others were talking about another.

One, a lawyer, mentioned that she'd had lunch with an artificial intelligence expert. After listening and asking a few questions, I realized she was describing what sounded like a power user of ChatGPT. When I started talking about some of the things I write about here related to artificial intelligence, she said that they had not discussed all of that. Apparently I went a bit too far, because she then asked, "But do you use the latest version of ChatGPT that you have to pay for, like this expert does?"

Well, yes, I do. I don't use it to write articles, and if I do use ChatGPT to write something, I quote it. I have my own illusions; I don't need to take credit for any hallucinations ChatGPT has. I also don't want to incorporate strategic deception into my writing. To me, it's a novelty and something I often find flaws with. I'm not going to beat up on ChatGPT; it has its uses, and the fact that I can use DALL-E to generate some images, like the one above, is helpful.

What disturbed me was that she thought that was what an artificial intelligence expert does. That seems a pretty low bar; I wouldn't claim to be an artificial intelligence expert because I spend $20/month. I'm exploring it like many others and stepping back to look at the problematic consequences, of which there are many. If we don't acknowledge and deal with those, the rest doesn't seem to matter as much.

That’s the trouble. Artificial intelligence, when discussed or written about, falls into two main categories that co-exist.

Marketed AI.

The most prominent right now is the marketing hype, where we get 'experts' who, for whatever reason, claim the title simply for being power users of early stabs at artificial intelligence. This is what I believe Cory Doctorow wrote about with respect to the 'AI bubble'. It's more about perception than reality, in my mind, and in some ways it can be good because it gets people to spend money so that, hopefully, those who collect it can do something more about the second category.

Yet it wasn't long ago that people were selling snake oil. Over the last few decades, I've seen 'website experts' become 'social media experts', and now suddenly we have 'artificial intelligence experts'.

Actual Artificial Intelligence.

The second category is actual artificial intelligence itself, which I believe we may be getting closer to. It's where expert systems, which have been around since the 1970s, have made some quantum leaps. When I look at ChatGPT, as an example, I see an inference engine (the code) and a knowledge base processed from a learning model. That's oversimplified, I know, and one can get into semantic arguments, but conceptually it's pretty close to reality.
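To make that split concrete, here is a minimal sketch in C++ of the old expert-system idea as I'm describing it: the inference engine is plain code, and the knowledge base is just data it walks over. This is my own toy illustration (the symptom data is made up), not how ChatGPT is actually built.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Knowledge base: symptom -> conditions it suggests (toy, made-up data).
    std::map<std::string, std::vector<std::string>> knowledge = {
        {"fever", {"flu", "infection"}},
        {"cough", {"flu", "cold"}},
    };

    // Inference engine: count which conditions every observed symptom points to.
    std::vector<std::string> symptoms = {"fever", "cough"};
    std::map<std::string, int> votes;
    for (const auto& symptom : symptoms)
        for (const auto& condition : knowledge[symptom])
            ++votes[condition];

    for (const auto& [condition, count] : votes)
        if (count == static_cast<int>(symptoms.size()))
            std::cout << "Consistent with: " << condition << "\n";  // prints "flu"

    return 0;
}
```

Swap in a different knowledge base and the same engine answers different questions, which is roughly the intuition I'm after.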

If you take a large language model like ChatGPT and feed it only medical information, it can diagnose based on the symptoms a patient has. Feed it only information on a programming language like COBOL, and it can probably write COBOL code pretty well. ChatGPT has a learning model we don't really know the contents of, and it is apparently pretty diverse, which allows us to do a lot of pretty interesting things besides generating silly images for blog posts. I've seen some JavaScript code done this way, and I just generated some C++ code as a quick test with ChatGPT 4 that, yes, works, and it does something better than most programmers do: it documents how it works.
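For flavor, here's the kind of small, commented C++ I mean. This is a hypothetical stand-in I'm writing for illustration, not the actual code ChatGPT 4 generated for me, but the result had the same quality: the comments explain what the code does and why.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Computes the arithmetic mean of a window of readings.
// Returns 0.0 for an empty window so we never divide by zero.
double windowMean(const std::vector<double>& window) {
    if (window.empty()) return 0.0;
    // std::accumulate sums the elements; dividing by the count gives the mean.
    return std::accumulate(window.begin(), window.end(), 0.0) / window.size();
}

int main() {
    std::vector<double> readings = {2.0, 4.0, 6.0};
    std::cout << windowMean(readings) << "\n";  // prints 4
    return 0;
}
```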

I’d written about software engineers needing to evolve too with respect to artificial intelligence.

It has the potential to revolutionize everything, all walks of life, and it's going to be really messy because it will change jobs and even replace them. It will have psychological and sociological consequences, impacting governments and the ways we do… everything.

The Mix of Marketed vs. Actual.

The argument could be made that without marketing, businesses would not make enough money to cover the continued expense of pushing the boundaries of artificial intelligence. Personally, I think this is true. The trouble is that marketing takes over what people believe artificial intelligence is. This goes with what Doctorow wrote about the bubble, as well as what Joe McKendrick wrote about artificial intelligence fading into the background. When the phrase is overused and misused in business, which already seems to be happening, the novelty wears off and the bubble pops.

That’s kind of what happened with social media and ‘social media experts’.

The marketing aspect also causes people to worry about their own jobs, jobs they may not even want but need because there are bills to pay in modern society. The fear some feel is tangible, and with good reason. All the large language models use a very broad brush in answering those fears, as do the CEOs of the companies: we'll just retrain everyone. But there are people getting closer to retirement, and what companies have been doing to save money and improve their stock performance is finding reasons to 'let people go', so that comfort is spoken from on high with the same sensitivity as "Let them eat cake". It's dismissive and ignores the reality people live in.

Finding the right balance is hard when there's no control of the environment. People are talking about what bubbles leave behind, but they don't talk as much about who they leave behind. Harvard Business Review predicted that the companies that get rid of jobs with artificial intelligence will eventually get left behind, but that can have some unpredictable economic consequences along the way.

‘Eventually’ can be a long time.

The balance must be struck by the technology leaders in artificial intelligence, and that seems about as unlikely as it was with the dot-com boom. Maybe ChatGPT 4 can help them out, provided they haven't been feeding it too many of their own claims.

And no, being a paying user of any artificial intelligence platform alone doesn't make you an 'artificial intelligence expert', just as buying a subscription to a medical journal doesn't make you a medical professional.