Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, such as CNBC’s “A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn”.

Variations of that article from different outlets are everywhere right now, but they all derive from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I’d offer that this can make it more dangerous.

In the context of what I’ve been writing about, we’re looking at what we feed our heads with. We’re looking at social media being gamed to cause confusion. These are dangerous things. Garbage in, Garbage out doesn’t just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan for how to employ those displaced. Do people want jobs? I think that’s the wrong question, one we got stuck with as old paint on society’s canvas. The more appropriate question is, “How will people survive?” – a question we overlook because of the assumption that anyone who wants to survive will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not; corporations like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with “excerpts” from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don’t exist.”

“Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’”, Jon Brodkin, Ars Technica, May 30th 2023.

It’s a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction created by those who control the large language models – and the general artificial intelligences that will come later.

Authoritarian governments could manipulate machine learning and deep learning to ensure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and serve a different set of propaganda, telling the user to stay home and stop playing with keyboards.

It’s not hard to imagine any of this. It is a big deal, but in parts of the world like Trinidad and Tobago, you don’t see much about it because there’s no real artificial intelligence here, even as local newspaper headlines indicate real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around discussing it. Odd how that didn’t come up in a Google search for “AI Trinidad and Tobago”.

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn’t as tangible to people as nuclear war or pandemics. It’s how it will change our way(s) of life, how we’ll provide for our families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots the media doesn’t show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world – but it gets lost in the shuffle of social media algorithms if it isn’t organically popular.

This is going to change societies around the globe. It’s going to change global society, where access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We’ve propped our civilizations up with all manner of things that have not withstood previous changes in technology, and this is a definite leap beyond that.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don’t know that we have any advice. That means they won’t be prepared.

But then, neither were we, really.

Artifice Girl

With all that’s being marketed as artificial intelligence out there, this could be an interesting movie for at least some people who might like to see a merging of technology and humanity.

If you’re not appreciative of movies driven entirely by dialog, this is not your movie. There’s a bit of suspension of disbelief required, too, which may not sit well with some people, but it is a movie, and like most things out of Hollywood it’s pretty easy to find some flaws when compared with the real world.

Still. The idea of using a chatbot to catch pedophiles is not bad. It’s also not new.

If you’ve never heard of Negobot, or Lolita Chatbot, it became public in 2013 – about a decade before ‘Artifice Girl’ – and if some of the dialog wasn’t borrowed from its Wikipedia page, I would be surprised.

Even so, it was a pretty good movie. Topical in how we are responsible for what we create, topical in how imperfect we are as a species, and topical about how we ourselves are reflected in our technology, like so many bits of broken glass on the floor sometimes.

Overall, I think it could be a fairly important movie at this time since everyone is agog over large language models such as ChatGPT.


Rolling The Text Dice.

I haven’t published any science fiction other than some things here and there on my blogs, so I wouldn’t say that I’m an expert on writing Science Fiction – but I most certainly would consider myself an expert on reading Science Fiction. And it’s Sunday, and this is what I feel like writing about today.

Presently, there’s an inflection point where technology can write like a science fiction author (poorly), however banal one might think the result is. There’s a recurring worry right now about creative endeavors being taken over, but as far as I know nothing very impressive has come out in this regard, and I don’t expect it anytime soon.

…There’s one barrier that AI can’t cross, as of now. And that’s creating new science fictional concepts. The writer who can take in everything that’s going on now and speculate about the near future in any coherent way has the edge – for the moment. AIs based on current training models are essentially limited to rearranging the deck chairs on the Carnival Cruise’s Mardi Gras. What’s disappointing, is that’s exactly what most current human science fiction writers are doing too…

“The Future of Science Fiction”, James Wallace Harris, 4/2/23.

I fully endorse what he writes there. Most human science fiction writers these days seem to work from Ye Olde CookeBooke of Formulaic Books, which has been greatly disappointing when I explore bookstores: running my fingers gently across a creative cover, flipping the book open, glancing within, and seeing… the same formulae.

I’m not saying there isn’t good original science fiction out there. It’s just gotten so hard for me to find. The dwindling number of bookstores no longer seem to hold that many new ideas and thoughts, and Amazon, while convenient, lacks the tactile experience and has probably adversely affected any romantic relationships that might have blossomed over book choices in line. Don’t worry, I’m sure they’ll eventually create a dating app based on books you’ve purchased through Amazon, regardless of whether you read or understood them.

“We read, frequently if not unknowingly, in search of a mind more original than our own.”

Harold Bloom, How To Read and Why, 2000.

Harold Bloom’s quote has haunted me for at least a decade now, if not longer, because every time I go into a bookstore and see the same depressing stuff on the shelf that seems like corporate pulp fiction, I think that maybe it’s time I write something better. Something not formulaic.

Something that hits like Robert Heinlein’s Stranger in a Strange Land, an oddly poetic work considering how conservative the author was to create such an interestingly liberal book.

I’m starting a new project tomorrow, holding myself to landmarks and deadlines, so I’ve been roving around the Internet and reading up on things like these. The reason I’m writing about it here is because, in its own way, it’s a system, as hinted at here:

…Right now we have more science fiction of all kinds being produced than ever before. That production is going into overdrive with AI. And the percentage of it that’s readable and entertaining is increasing. That’s also part of the problem – there’s too much science fiction – at least for any one work to become widely known. Good science fiction novels have almost become a generic product line, in white boxes with black letter labeling…

“The Future of Science Fiction”, James Wallace Harris, 4/2/23.

Anything sustainable in the world of our creation has to be fiscally sustainable. This is what happened with Web 2.0, this is what has happened with publishing in general, and so it’s happened to just about everything else.

Saying that there is a lack of originality in Science Fiction is very much like saying that there’s no real originality in modern music. It’s subjective.

As originality decreases, banality increases.

Now here’s the rub. Bayesian probability – the core of just about every algorithm out there offering you suggestions on everything from Netflix to Facebook – targets the median, so the median decides what is original or not.

As the average person experiences more original works, they increasingly see suggestions as banal. Bayesian probability only works based on what has happened.
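To make that concrete, here’s a minimal sketch – toy data, not any real recommender – of how a frequency-based suggestion engine behaves. Anything it has never observed starts at the bottom of the list, no matter how original it is:

```python
from collections import Counter

# Toy suggestion engine: score items by how often past users engaged with them,
# with Laplace smoothing standing in for a Bayesian prior. Purely illustrative.
history = [
    ["space_opera", "military_sf", "space_opera"],
    ["space_opera", "dystopia"],
    ["military_sf", "space_opera"],
]

counts = Counter(item for user in history for item in user)
total = sum(counts.values())

def score(item, prior=1.0, n_items=4):
    # Smoothed estimate of "how likely is the next reader to want this?"
    # An unseen (i.e., original) work never rises above the prior.
    return (counts[item] + prior) / (total + prior * n_items)

print(score("space_opera"))     # highest: the median taste wins
print(score("weird_new_idea"))  # lowest: never observed, so rarely suggested
```

The arithmetic isn’t the point; the point is that ‘what has happened’ is the only input the model has.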

Beating that system for marketing requires a mix of what people have liked with what is original, all of which are moving targets on a very small asteroid to be hit from a spaceship with really bad targeting.

Roll the dice, I suppose. I’d rather we had an infinite improbability machine.

Education and AI

Having coffee with a friend yesterday, I was listening to her talk about meetings with teachers and encouraging her daughter to read at least 30 minutes a day. This is likely mundane conversation for many people, but the minutiae of it is interesting to me.

It is, after all, a system.

Everyone I have met or interacted with who was an educator (or librarian – they don’t get enough love) has been passionate about learning. Educational administrators I know have been passionate about test results. Test results and learning are not the same thing.

The tests are there to prove to the masses that the educational system is working. They rarely demonstrate what one knows about anything, only what was tested – and students are trained for the tests. Learning, to me, is about not just having knowledge but applying that knowledge in practical ways. Deep understanding is not found through testing; it’s found through interaction – at least in my experience.

Sure, you can memorize equations and plug numbers into them, but unless you know more about the equations themselves, you’re just doing something a computer can do.

So let’s see what computers can do. I asked 3 different large language models, “Has the quality of education declined?”. Here are 3 different responses:

The question of whether the quality of education has declined is complex and can vary across different contexts and regions. It’s challenging to provide a definitive answer as education systems differ significantly worldwide, and numerous factors can influence quality.

While it’s true that some individuals and experts express concerns about declining educational quality, it is essential to consider various factors that impact education. These factors include changes in curriculum, teaching methods, funding, technological advancements, and societal shifts.

To assess the quality of education, it is necessary to examine specific indicators such as student performance on standardized tests, graduation rates, college enrollment rates, and feedback from educators, parents, and students. These indicators can provide insights into the overall effectiveness of education systems.

However, it is worth noting that generalizing the quality of education across all regions or institutions is not accurate, as there can be significant variations within and between countries. It is essential to consider local and contextual factors when evaluating educational quality.

ChatGPT.

As an AI language model, I cannot have an opinion. However, reports suggest that the quality of education varies by location and institution, with some improving and others declining. Factors like funding, teaching methodology, and teacher quality can also impact the quality of education. Additionally, the increasing demands of technology and the workforce may require updates to curriculum and teaching methods to effectively prepare students for the future.

DeepAI Chat.

There is no definitive answer to this question, as the quality of education can vary greatly depending on the country, school, and individual student. However, there are some general trends that suggest that the quality of education may have declined in recent years.

One trend is that students are spending less time on academic subjects and more time on extracurricular activities. This is likely due to a number of factors, including the increasing pressure to succeed in college and the growing popularity of sports and other activities.

Another trend is that teachers are facing increasing workloads and demands. This can make it difficult for them to provide students with the individual attention they need.

Finally, the curriculum in many schools is becoming more standardized, which can lead to a lack of creativity and innovation.

All of these factors can contribute to a decline in the quality of education. However, it is important to note that there are also many schools and teachers who are working hard to provide students with a high-quality education.

Here are some of the reasons why the quality of education may have declined:

  • The increasing focus on standardized testing has led to a narrowing of the curriculum and a focus on test-taking skills over critical thinking and problem-solving skills.
  • The shortage of qualified teachers has made it difficult to attract and retain the best teachers, and those who are teaching are often overworked and underpaid.
  • The increasing cost of education has made it more difficult for families to afford to send their children to college, which has led to a decline in the number of students who are graduating from college.
  • The rise of technology has made it easier for students to access information outside of the classroom, but it has also made it more difficult for them to focus and learn.

Despite these challenges, there are also many reasons to be optimistic about the future of education. There is a growing movement to reform the education system and to focus on teaching students the skills they need to succeed in the 21st century. There are also many dedicated teachers who are working hard to make a difference in the lives of their students. With hard work and dedication, we can reverse the decline in the quality of education and ensure that all students have the opportunity to succeed.

Bard.

I was a bit surprised at the differences in the responses and completely unsurprised by the commonality. I wouldn’t expect them to know, since nobody knows; all these large language models (LLMs) do is draw on whatever information the particular model was provided when prompted with a question. The reality is that we don’t know, therefore an LLM wouldn’t know.

If anyone thinks that’s not an issue, I think they may have had an education system land on their head in a most grotesque fashion.
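For anyone who wants to repeat the experiment, here’s a minimal sketch of prompting one hosted model through its API – assuming the openai Python package, an API key in the environment, and a model name that may well have changed by the time you read this:

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment; "gpt-3.5-turbo" is a
# placeholder model name, since hosted model names change frequently.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Has the quality of education declined?"}],
)

print(response.choices[0].message.content)
```

Swap in another provider’s library and you’ll get another confident synthesis of the same uncertainty.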

We’re getting marketed “artificial intelligence”, machine learning, deep learning, neural networks, large language models and just about every cool phrase that causes nerdgasms.

When we consider the status of education systems for humans – human learning – we don’t even seem to have an idea of where we are. Further, students who were once copying and pasting from the Internet for assignments now get large language models doing their homework – if they can access them.

Something doesn’t seem quite right about that. Certainly, the technological progress is great, but I’m not sure we’re making smarter humans. I’m also not sure we’re making wiser humans.

What’s there to do? There seems to be at least some people thinking about the topic of education and AI, but as a society, are we too busy paying the bills and chasing red dots to have time for the future?

Whose Safe Space Is It Anyway?

Corporations have been creating “safe spaces” for themselves for a while, and while that can be read as either good or bad depending on how you feel about things, let’s just accept that as an objective truth.

Disney took things from the public domain and copyrighted their versions, making them as ubiquitous as their marketing – and then worked hard to close the door for others to do the same with their works which should have passed to the public domain.

The Sonny Bono Act, or Mickey Mouse Protection Act, extended copyright terms to keep works from going into the public domain:

“…Following the Copyright Act of 1976, copyright would last for the life of the author plus 50 years (or the last surviving author), or 75 years from publication or 100 years from creation, whichever is shorter for a work of corporate authorship (works made for hire) and anonymous and pseudonymous works. The 1976 Act also increased the renewal term for works copyrighted before 1978 that had not already entered the public domain from 28 years to 47 years, giving a total term of 75 years.[3]

The 1998 Act extended these terms to life of the author plus 70 years and for works of corporate authorship to 120 years after creation or 95 years after publication, whichever end is earlier.[4] For works published before January 1, 1978, the 1998 act extended the renewal term from 47 years to 67 years, granting a total of 95 years.

This law effectively froze the advancement date of the public domain in the United States for works covered by the older fixed term copyright rules…”

Copyright Term Extension Act, Wikipedia, accessed on 16 May 2023.
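To put numbers on that: under the 1976 terms, a work of corporate authorship published in 1928 would have entered the public domain after 75 years, on January 1, 2004. The 1998 extension stretched that to 95 years, keeping the same work locked up until January 1, 2024.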

Corporations acted in their own self-interest. Lawrence Lessig’s Free Culture was the first place I read about it, though I don’t know that he was the first to note it. They created a safe space for their copyrights even though those works had their roots in the public domain.

The world is full of other examples.

Bill Gates would dumpster dive and study code printouts, among other things. Lots of people don’t seem to know that Microsoft’s famous founder didn’t start the company without understanding – and borrowing, if not buying – code from others. There’s nothing particularly shameful about it.

“The best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating systems.”

Bill Gates, interview with Susan Lammers, 1986.

I don’t think any programmer would disagree with the sentiment. Yet the same Bill Gates who did that also wrote an open letter to hobbyists in 1976 that did not reflect it:

“…The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however, 1) Most of these “users” never bought BASIC (less than 10% of all Altair owners have bought BASIC), and 2) The amount of royalties we have received from sales to hobbyists makes the time spent on Altair BASIC worth less than $2 an hour.

Why is this? As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?…”

“An Open Letter To Hobbyists”, Bill Gates, cited in the New York Times archives.

Most people would say, “Well, he has a point.” And he did – in protecting a business model he was creating, one which kept people from being able to view the source code and learn from it. Was it a bad thing? A good thing? It doesn’t matter; it was a thing.

At the time, it was a bunch of scattered hobbyists before the Internet against a corporation that could afford lawyers and marketing. It was the wild, wild west of personal computing.

The above examples are 2 of many ‘negotiations‘ between the public and corporations, though with the increased political influence corporations wield through lobbying – and with money now counting as free speech – it’s hard to consider it a legitimate negotiation.

If you have 45 minutes, Pirates of Silicon Valley is worth watching.

The point is that corporations always do things like this, for better or worse – and for better and worse. With the emergence of artificial intelligence-like technologies, the safe space of creators is being abstracted away into statistics. By extension, this also applies to the privacy of everyone’s data.

My thought is, the greater the wings, the more confident the bird should be where it roosts. If corporations are indeed made of individuals working toward common goals and are creating things, that’s great! But it should not come at the cost of competition, which is one of the founding principles of capitalism… which corporations tend to cite only when convenient.

“Do as we say. Not as we do.”

It’s All Statistics.

Everyone’s out to protect you online because everyone’s out to get you online. It’s a weird mix of people who want to use your data for profit and those who want to use your data for profit.

Planned obsolescence has become ubiquitous in this technological age. It wasn’t always this way. Things used to be produced to last, not to be replaced, and this is something to ponder before joining a line to get the latest iPhone, or when a software manufacturer shifts from a purchase model (where the license sometimes indicates you don’t really own the software!) to a subscription model.

The case has been made that software can’t be produced or maintained for free. The case has also been made, with less of a marketing department, that Free Software and Open Source software can do the same at a reduced cost. The negotiations are ongoing, but those who built their corporations from dumpster diving to read code printouts definitely have the upper hand.

Generally speaking, the average user doesn’t need complicated. In fact, the average user just wants a computer where they can browse the internet, write simple documents and spreadsheets. Corporations producing software on the scale of Microsoft, Google, Amazon, and so on don’t really care too much about what you need, they care about maintaining market share so that they can keep making money. Software has more features than the average user knows what to do with.

Where the business decisions are made, it’s about the bottom line. It’s oddly like something else we’re seeing a lot of lately. It seems unrelated, yet it’s pretty close to the same thing when you think about it.

“…This is true of the cat detector, and it is true of GPT-4 — the difference is a matter of the length and complexity of the output. The AI cannot distinguish between a right and wrong answer — it only can make a prediction of how likely a series of words is to be accepted as correct. That is why it must be considered the world’s most comprehensively informed bullshitter rather than an authority on any subject. It doesn’t even know it’s bullshitting you — it has been trained to produce a response that statistically resembles a correct answer, and it will say anything to improve that resemblance...

…It’s the same reason AI can produce a Monet-like painting that isn’t a Monet — all that matters is it has all the characteristics that cause people to identify a piece of artwork as his. Today’s AI approximates factual responses the way it would approximate “Water Lilies.”…”

“The Great Pretender”, Devin Coldewey, TechCrunch, April 3, 2023.

Abstracted away, the large language models aren’t that different from business teams – except, maybe, business teams could actually care about their consumers, but instead they rely on statistics, just like large language models do. It’s a lot like the representations of Happy, Strong and Tough that I wrote about with AI-generated images. It’s an approximation based on what the models and algorithms are trained on – which is… us.
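A toy version of the idea, for the curious: a miniature next-word predictor that, like its billion-parameter cousins, knows only which word tends to follow which – and nothing about whether the output is true:

```python
import random
from collections import defaultdict

# Miniature next-word predictor: it learns only which word follows which.
# Truth never enters into it -- only observed frequency in the toy corpus.
corpus = ("the model predicts the next word and "
          "the model predicts a likely word").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # duplicates encode frequency

def generate(start, length=8, seed=7):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample by observed frequency
    return " ".join(words)

print(generate("the"))  # fluent-looking output that statistically resembles the corpus
```

Scale that up by a few billion parameters and you get fluency, not veracity.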

There could be a soul to the Enterprise, I suppose, but maybe the Enterprise needs to remember where it comes from.

Trafficked++

It’s not a mistake that I was writing about practical communication earlier this morning, because on the Internet there are different rules if you’re concerned about traffic to your content.

There’s all manner of Search Engine Optimization stuff, from linking to similar content to being linked to from similar content, to using words and phrases commonly searched for… to… well, SEO is not as easy as it once was.

Writing with SEO in mind is not an easy task if one wants to have readable content. Sure, people might end up staring at your content because you’re a wizard at marketing through SEO and other means, but that doesn’t mean your content is actually useful. I can’t tell you how many times I’ve tried researching something and fallen into what I call ‘ambiguity traps’.

For example, yesterday I was trying to figure out how to set the default volume on a Windows 10 machine when it boots so I don’t have to always turn down the sound. That got me finding things about everything but what I was searching for, and after interrogating a few search engines that gave me results about the drive volume instead of the sound volume, I realized that Microsoft didn’t seem to have the capability I was looking for.

A useful piece of content might have been, “Nope. You’re out of luck. You can’t do that.” Of course, there’s the outside chance that there’s some secret setting hidden somewhere in the registry that makes it all possible, but I do not feel the need to sacrifice a farm animal and do the hokey pokey.

Generally speaking, on the Internet, it’s not as much about being useful as it is driving traffic to get advertising impressions. A few sites actually care about the content, and those sites aren’t commercial sites unless they’re hidden behind a paywall, which means their content likely doesn’t get indexed by the search engine bots.

And that’s what Web 2.0 gave us from the technological tropism. It doesn’t end there.

If you haven’t seen the BewareOfImages.com documentary (2016), just follow the link to go see it. It’s 2 hours and 40 minutes long, but worth the watch, so grab a beverage and snacks when you do.

Somewhere during all of this, opinions gained traction over news, and then we got into fake news. If you watch the BewareOfImages documentary, you’ll see that this isn’t all that new either. It seems like a recurring theme.

All of this, quite possibly, makes it into the large language models that are so hyped right now.

What could possibly go wrong? In the broad strokes, that’s what some of us are worried about.

When Wendy Meets Karen.

Because the economy is “so awesome,” Wendy’s is working with Google on integrating an AI chatbot into its drive-thrus. The quotation marks might be an indicator of sarcasm.

Granted, I’m not a fan of fast food jobs, though when it comes to fast food burgers in the United States, I do lean toward Wendy’s. Even so, I have noted over the years that the Wendy’s I visited weren’t necessarily the best maintained areas to eat. That could be a factor of geography in Florida.

So cutting costs and decreasing lines would make sense, except… well, there are people working multiple jobs in these fast food places, making not that much, getting yelled at by angry Karens who demand to see the manager.

“…Penegor [Wendy’s Chief Executive] said the goal of the chatbot is to help reduce long lines from forming in the drive-thru lane, which could prompt some potential customers to go elsewhere. In my experience with most fast food joints, it’s not the long lines that turn customers away but rather, the slow pace and incorrect nature in which an order is prepared in the kitchen that’s the problem. Other establishments like Chick-fil-A and In-N-Out Burger figured this out long ago and can successfully manage long lines with efficiency…”

Shawn Knight, “It’s happening: AI chatbot to replace human order-takers at Wendy’s drive-thru”, TechSpot.com, May 9th 2023.

With the price of gas as it is just about anywhere in the world, the only time I go through a drive-thru is when there is nobody else in it. Honestly, who but a blithering idiot would sit in a drive-thru line – which you’re generally trapped in – burning gas and waiting more than a few minutes?

I agree with Shawn Knight, too. The problem isn’t in taking the order, the problem is in producing the order at peak times in time for the line to continue moving and taking orders. Granted, Wendy’s may have done some metrics and come up with this, or at least I would hope that they have, but generally speaking taking the order is not the problem.

Well, at least not on the employee end.

The other side of the problem is the people who get to where they can make the order… and don’t know what they want. That leads to how far away the menus are from where one orders. It’s nice to have the menu where you order, but if you’re in line it might help speed things up if you could see the menu before you get there. This theory doesn’t pan out when you walk inside to order, though, because people are… well, not that sharp.

In the end, some people will lose jobs, and they won’t be getting jobs as software engineers at Google.

The real fun will begin when the Chatbots meet Karen.

AI: Standing on the Shoulders of Technology, Seeking Humanity

“When the mob governs, man is ruled by ignorance; when the church governs, he is ruled by superstition; and when the state governs, he is ruled by fear. Before men can live together in harmony and understanding, ignorance must be transmuted into wisdom, superstition into an illumined faith, and fear into love.”

Manly P. Hall, The Secret Teachings of All Ages.

It’s almost impossible to keep up with all that is going on related to discussion on what’s being marketed as artificial intelligence, particularly with a lot of speculation on how it will impact our lives.

Since the late 1970s, we have evolved technology from computers to personal computers to things we carry around that we still call ‘phones’, although their main purpose no longer seems to revolve around voice contact. In that time, we’ve gone from having technology on a pedestal that few could reach to a pedestal most of humanity can reach.

It has been a strange journey so far. If we measure our progress by technology, we’ve been successful. That’s a lot like measuring your left foot with your right foot, though, assuming you are equally equipped. If we measure success fiscally and look at the economics of the world, a few people have gotten fairly rich at the expense of a lot of people. If we measure it in knowledge access, more people have access to knowledge than any other time on the planet – but it comes with a severe downside of a lot of misinformation out there.

We don’t really have a good measure of the impact of technology in our lives because we don’t seem to think that’s important outside of technology, yet we have had this odd tendency in my lifetime to measure progress with technology. At the end of my last session with my psychologist, she was talking about trying to go paperless in her office. She is not alone.

It’s 2023. The paperless office was one of the technological promises made in the late 1980s. That’s a span of more than three decades. In that same period, it seems that the mob has increasingly governed, superstition has governed the mob, and the states have increasingly tried to govern. It seems that as a whole, despite advances in science and technology, we, the mob, have become more ignorant, more superstitious and more fearful. What’s worse, our attention spans seem to have dropped to 47 seconds. Based on that, many people have already stopped reading because of ‘TLDR’.

Into all of this, we now have artificial intelligence to contend with:

…Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI, many other researchers are worried too…

HT Tech, “AI Experts Aren’t Always Right About AI”.

Counter to all of this, we have a human population that is clearly better at multiplying than at math. Most people around the world are caught up in their day-to-day lives, working toward some form of success even as we are inundated with marketing and biased opinions parading around as news, all through the same channels that now connect us to the world.

In fact, it’s the price we pay, it’s the best price Web 2.0 could negotiate, and if we are honest we will acknowledge that at best it is less than perfect. The price we pay for it is deeper than the cost we originally thought and may even think now. We’re still paying it and we’re not quite sure what we bought.

“We are stuck with technology when what we really want is just stuff that works.”

Douglas Adams, The Salmon of Doubt.

In the late 1980s, boosts in productivity were sold to the public as ‘having more time for the things you love’ and variations on that theme, but that isn’t really what happened. Boosts in productivity came with a shift in corporate focus, so that the more you did, the more you had to do. Speaking for myself, every employer hired for a 40-hour work week but demanded closer to 50. Sometimes more.

Technology marketing hasn’t been that great at keeping promises. I write that as someone who survived as a software engineer with various companies over the decades. Like so many things in life, the minutiae multiplied.

“…Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year…”

Naomi Klein, “AI Machines Aren’t ‘Hallucinating’. But Their Makers Are”.

There was a time when a software engineer had to go from collecting requirements to analysis to design to coding to testing to quality assurance to implementation. Now these are all done by teams. They may well all be done by versions of artificial intelligence in the future, but anyone who has dealt with clients firsthand will tell you that clients are not that great at giving requirements, and that reality has been rolled into development processes in various ways.

Then there is the media aspect, where we are all media tourists picking our social media adventures, creating our own narratives from what social media algorithms pick for us. In a lot of ways, we have an illusion of choice when what we really get is what the algorithms decide we want to see. That silent bias also includes content paywalled into oblivion, never mind all the linguistic bias, where we’re still discovering new biases.

Large Language Models like ChatGPT, called artificial intelligences with a degree of accuracy, have access to information that may or may not be the same as what we have in our virtual caves. They ‘learn’ faster, communicate faster and perhaps more effectively, but they lack the one thing that would let them pass a real Turing test: being human.

This is not to say that they cannot fake it convincingly by using Bayesian probability to stew our biases into something we want to read. We shouldn’t be too surprised, we put stuff in, we get stuff out, and the stuff we get out will look amazingly like stuff we put in. It is a step above a refrigerator in that we put in ingredients and we get cooked meals out, but just because a meal tastes good doesn’t mean it’s nutritious.

“We’re always searching, but now we have the illusion we’re finding it.”

Dylan Moran, “Dylan Moran on sobriety, his childhood, and the internet | The Weekly | ABC TV + iview”.

These stabs at humanity with technology are becoming increasingly impressive. Yet they are stabs, with all that goes with stabs. A world limited to artificial intelligences can only make progress within the parameters and information we give them. They are limited, and they are as limited as we are, globally, biases and all. No real innovation happens beyond those parameters and that information. An AI does not create new knowledge; it simply dresses up old knowledge in palatable ways very quickly, and what is palatable now may not be so next year. Or next month.

If we had been dependent on artificial intelligences in the last century, we might not have made many of the discoveries we did. The key word, of course, is dependent. On the other hand, if we had understood their limitations and incentivized humanity to add to this borgish collective of information, we might have made technological and scientific progress faster, but… would we have been able to keep up with it economically? Personally?

We’re there now. We’re busy listening to a lot of billionaires talk about artificial intelligences as if billionaires are vested in humanity. They’re not. We all know they’re not; some of us just pretend they are. Their world view is very different. This does not mean that it’s wrong, but if we’re going to codify an artificial intelligence with opinions somehow, it seems we need more than billionaires and ‘experts’ in such conversations. I don’t know what the solution is, but I’m in good company.

The present systems we have are biased. It’s the nature of any system, and the first rule of a sustainable system is making sure it can sustain itself. There are complicated issues related to intellectual property that can diminish how much new information gets added to the pool, balanced against economic systems that, in my opinion, should also create the possibility of a livelihood for those who create and innovate – not just in science and technology, but in other ways that advance humanity.

I’m not sure what the answers are. I’m not even sure what the right questions are. I’m fairly certain the present large language models don’t have them because we have not had good questions and answers yet to problems affecting us as a species.

I do know that’s a conversation we should be having.

What do you think?

Normalized Vice, AI-pedia?

When I read “AI is tearing Wikipedia apart”, I immediately recalled all the personal issues I had with the never-to-return-because-I-said-so page on myself. It’s long and involved, but the short story is about dealing with some pretty different ways we all think of Wikipedia, and the different sects of volunteers involved. Yes, there are sects, and I had a run-in with the deletionist sect because of a profile I didn’t create, but some journalist had.

It’s not pretty when you turn people loose to organize that much information on a volunteer basis. When Jimmy Wales and I shared the same geography, we planned to get coffee sometime, and we were both too busy to do it. I mentioned the page to him, and he rightly said something to the effect that it was for the editors to deal with. It was personal for me (how can a Wikipedia page not be?), and what I did influence were some new rules on dealing with biographies of living people.

But yes, Wikipedia using a large language model? The biases… well, that’s just a headache to discuss. I posted the article on my personal Facebook page, where I have a few friends who are editors at Wikipedia, and they didn’t bite. One person did, however, point out that Vice.com, the publisher of that article, is headed for bankruptcy.

The normalization of Web 2.0 coinciding with the new disruption of large language models reminds me of dominoes toppling onto each other. That’s an interesting, and peculiar, twist.

An ebb of disruption, a new wave of disruption. Much of tech isn’t about tech.