Noam Chomsky, Luddism, and AI

There’s been a lot of discussion about society and artificial intelligence. This includes a resurgence of mocking shares of Noam Chomsky’s opinion piece, “The False Promise of ChatGPT”, where the only real criticism seems to have come from people asking ChatGPT what it thought of Chomsky’s critique.

I suppose human thought would be asking too much when it comes to responding to human thought in the age of generative artificial intelligence, and those responses rather make Chomsky’s point for him.

Anyone who has been paying attention to what can presently be done with artificial intelligence should be concerned at some level. The older one is, the more one has at risk, since hard-won skills and experience can easily be flattened by these generative AIs.

So, let’s look at the crux of what Chomsky said:

“…Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge…”

Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT”, New York Times, March 8, 2023.

Given that generative AIs do their ‘magic’ through statistics, we see biases based on what their training data consists of. I caught DALL-E misrepresenting ragtime musicians recently. A simplified way of looking at it is that the training data is implicitly biased because the art and literature on the Internet are themselves biased by what is available and what is not, as well as by what they consist of. This could be a problem for mankind’s scientific endeavors, though there is also space to say that it may let us connect things that were previously not so easy to connect. That is how I generally use generative AI.
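To make that “magic by statistics” point concrete, here’s a minimal sketch of how a purely statistical model reproduces whatever proportions its training data happens to contain. This is a toy frequency model, nothing like DALL-E’s actual architecture, and the corpus and captions are invented for illustration:

```python
from collections import Counter

# A toy "training corpus": whatever is over-represented in the data
# will be over-represented in anything generated from it.
corpus = (
    ["ragtime pianist in a suit"] * 9 +
    ["ragtime pianist in casual clothes"] * 1
)

counts = Counter(corpus)
total = sum(counts.values())

# A model driven purely by statistics reproduces these proportions
# in its outputs, regardless of whether they reflect reality.
probabilities = {text: n / total for text, n in counts.items()}

for text, p in sorted(probabilities.items()):
    print(f"{p:.0%}  {text}")
```

If nine out of ten training images show one kind of ragtime musician, nine out of ten generated images will too; the bias comes in with the data, long before any generation happens.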

Debasing our ethics seems to be something we’re pretty good at by ourselves. Consider the lawsuits over the use of copyrighted materials to train ChatGPT. When you consider how much people have put into their work over the years, it is at least ethically questionable to use that work without recompense. That’s the ethical side, which is subjective; the legal side has yet to be decided.

A definite issue to consider at this moment in history is how Israel is using AI to select targets, and how well that is working out for civilians. It is a hot topic, but regardless of where one stands, it is commonly agreed that civilian casualties should be avoided, except by the most extreme, and thus problematic, positions on either side. Is it the AI’s fault when civilian Palestinians die? Where is the line for accountability drawn? We can’t even seem to get that settled without AI in the picture, and AI is in the picture.

It’s no secret that law and ethics aren’t compatible often enough. And of course, generative AI use in law has so far been problematic.

So is this really a case of Noam Chomsky being a Luddite? No, far from it: he’s pointing out that we might have issues with a new technology, and he specifies what sort of issues he sees. In fact, I even asked ChatGPT, which you can see in the footnote.1

And yet there are Luddites. In fact, I got to know more about Luddites by reading what one had to say about humanity’s remaining timeline; one even predicts that humanity will end within two years.

I’d say Noam Chomsky’s piece will stand the test of time, and some of it may be judged outside the moment in which he wrote it; the plagiarism aspect, for one, seems tied to that copyright issue.

The Luddites have consistently been wrong, and/or humanity has been consistently lucky.

However you look at it, a bit of skepticism is a good thing. Noam Chomsky gave us that. It’s worth understanding it for what it is. Implicitly, it’s partly calling for us to be better humans without a religious guise.

That gets some agreement from me.

  1. ↩︎
