AI From A Half-Century Perspective.

When I read the Forbes article, “Why These 50 Over 50 Founders Say Beware Of AI ‘Hallucination’”, I chuckled a bit. I happen to be over 50, and I felt the ageism in technology in the United States creeping onto my shoulders like a damp bathrobe.

The kids don’t want to hear what the grizzled veterans of technology have to say, just as when we were kids we didn’t want to listen to those who had calluses from punch cards, and yet we respected them.

So Forbes found 50 founders over 50 and they said pretty much the same thing we were told when we started out: With great [computational] power comes great moral responsibility.

Some of us took that seriously. Some of us did not. If we’re honest and look around a bit, maybe collectively we didn’t do so well with that. Very few of us have had the opportunity to do something with our abilities and skills, largely because the market drives everything, what people want drives the market, and the marketers drive what people want. As a byproduct of that, people are still buying Microsoft 365 subscriptions even though they could be using LibreOffice at no cost.

Governments in the developing world have their budgets eaten that way, and that sells more hardware: Microsoft has done many good things, but it also keeps requiring people to buy new computers so it can sell more software. Meanwhile, those budgets do little to address the actual issues of the developing world – but do just one more update and it will all be improved and the world will be a better place.

Colonel Sherman T. Potter, a character from M*A*S*H, taught me the value of saying, “Horse Puckey!”

Horsepuckey.

People make bad decisions. Governments make worse decisions. Technology, driven by both, hasn’t been a moral compass.

The article is good. It hits the high points of much of my confirmation bias. It also does not jibe with the realities of the world we live in, because as much as people want to say that the issues of artificial intelligence are technology issues, they are only symptoms of our human problem. If technology could wag the dog, maybe the world would be a better place – but technology does not, and it’s hubris to think otherwise.

There are people who have used science and technology to make the world better. Nikola Tesla’s cheap sale of patents to Westinghouse gave us three-phase electricity. Dr. Jonas Salk didn’t patent the polio vaccine. Sir Frederick G. Banting, Charles H. Best, and J. J. R. Macleod sold the insulin patent to the University of Toronto for $1 each, with Banting famously saying, “Insulin does not belong to me, it belongs to the world.”

Most people don’t know these names. Most people these days have their ears filled with Musk and Zuckerberg, who have done nothing near what those important people did with science and technology. We do have Linus Torvalds, who made an operating system free to the world, but most people don’t know about him.

Computational power has now become a beast untethered from morality, because our moral compasses apparently don’t point toward making the world a better place.

And this from someone over 50.

The learning models for these artificial intelligences are built from publicly available data, possibly without the permission of the authors. We don’t know what’s used other than the vague statements thrown at us, even as people toss about the phrase ‘median human’ as if it means anything other than the commodification of human ‘productivity’ – as if we were horses generating horsepower. Few of you have seen a horse in person these days, I’m pretty sure. And these companies turn around and charge people money to use their artificial intelligences even as they look at replacing those same people in the workplace.

The family of Henrietta Lacks is still wondering about morality in medicine which, lest we forget, is just another branch of human technology – science that has become technology, with her cells commoditized.

That great computational power brings great responsibility is true, yet that responsibility seems to exist in a vacuum.


The Median Human.

DeepAI’s generation from the prompt, “The Median Human”, using the 3D character API, created on the 1st of October 2023.

I’ve been keeping an eye on artificial intelligence-related news, and most of it isn’t as important to write about as the people writing about it seem to think, so I haven’t really written much about it lately.

Yet the concept of the ‘median human’ is. I asked DeepAI to generate an image of ‘The Median Human’. It’s not what I pictured, and I doubt it’s what you pictured.

This has been batted around quite a bit lately because Sam Altman has been accused of using the phrase “median human” a lot. Articles like “Sam Altman Plans To Replace Normal People With AI” abound, with people even trying to cash in on articles about the use of ‘median human’.

Well, it’s a silly phrase. It’s so silly, in fact, that DeepAI gave this response:

That’s exactly the problem with the phrase. I wrote “Deviants” with that in mind, because grouping by standard deviation would make more sense. We live in a society where ‘normal’ and medians do not necessarily exist except as concepts. ‘Normal’, though, has a range and is a bit more workable, but it’s difficult to conceptualize.
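To make the statistical point concrete, here’s a minimal Python sketch with made-up numbers – purely illustrative, not any real measure of people. A single median collapses a skewed group into one value, while a mean plus or minus one standard deviation at least admits there is a spread.

```python
import statistics

# Made-up, deliberately skewed "scores" standing in for some human trait.
population = [12, 15, 15, 18, 20, 22, 25, 30, 45, 90]

median = statistics.median(population)   # one value for everyone: 21.0
mean = statistics.mean(population)       # pulled upward by the outlier: 29.2
stdev = statistics.stdev(population)     # sample standard deviation: ~23.4

print(f"median: {median}")
print(f"mean ± one standard deviation: {mean - stdev:.1f} to {mean + stdev:.1f}")
```

Neither number describes any actual member of the group, which is rather the point.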

So let’s take a step back. Take a breath. The people who are being criticized for using the phrase ‘median human’ are the people who have access to loads of data about everyone who is connected to a device. To them, to even be considered ‘a median human’ would require internet connectivity and probably the collection of data from social media, gaming and shopping websites, as well as what sort of articles one reads on the Internet. Probably even the pornography viewed, to give you a disturbing idea of how much information exists within our digital shadows¹.

With all that data, it may be that data alone that the people spouting about median humans are basing their median on – and that’s a lot of data, but it’s incomplete. Humanity exists beyond the Internet, despite what some might say, and humanity isn’t necessarily what we are but what we aspire to.

Can the average human trek around Mars? Nope. But an artificial intelligence could, trained properly. Can an artificial intelligence create a human life through sexual intercourse with another artificial intelligence? Well, there’s an interesting variation of the Turing test.

What the use of the phrase really does is simplify thinking about humans, and I, along with others, would argue that it over-simplifies them. It treats humans as commodities, things that fit neatly in boxes so that tech people can try to talk about humans… when really, they probably shouldn’t.

The real issue is the talk of ‘replacing humans’. I’m all for an artificial intelligence doing things that I don’t want to do, but since people are tied to economies through jobs, giving artificial intelligences jobs by reducing human jobs leaves us with the same problem we started with:

People making a living in a world where our civilizations have been substituted for the natural environment we evolved within, where we’re still fighting like chimpanzees over the trees with the most fruit – except that, for us, the fruit is money.

Maybe we should spend less time worrying about ‘median humans’ and more time thinking about being human and what that means now, how it has changed, and what the future of that will be. Instead, everyone’s busy selling snake oil.

¹ Recommended read: “The Digital Person” by Daniel Solove, a good starter for non-tech and tech people alike.