The Technological Singularity.

Everyone’s been tapping away at their keyboards about – and perhaps having ChatGPT explain – the technological singularity, or the artificial intelligence singularity, or the AI singularity, or… whatever it gets repackaged as next.

Wikipedia has a very thorough read on it that is worth at least skimming to understand the basic concepts. It starts with the simplest of beginnings.

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization… [2][3]

Technological Singularity, Wikipedia, accessed 11 July 2023.

By that definition, we could say that the first agricultural revolution – the Neolithic agricultural revolution – was a technological singularity. Agriculture, which many take for granted, is actually a technology, and one we’re still trying to make better with our other technologies. Computational agroecology is one example of this.

I have friends and former colleagues who went on to apply drone technology to agriculture, circa 2015. Agricultural technology is still advancing, but the difference between it and the technological singularity everyone’s writing about today is that we’re now talking, basically, about a technology that has the capacity to become a runaway technology.

Runaway technology? When we get artificial intelligences doing surgery on their code to become more efficient and better at what they do, they will evolve in ways that we cannot predict but we can hope to influence. That’s the technological singularity that is the hot topic.

Since we can’t predict what will happen after such a singularity, speculating on it is largely a work of imagination. It can be really bad. It can be really good. But let’s get back to present problems and how they could impact a singularity.

…Alignment researchers worry about the King Midas problem: communicate a wish to an A.I. and you may get exactly what you ask for, which isn’t actually what you wanted. (In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.) In what we might call the dog-treat problem, an A.I. that cares only about extrinsic rewards fails to pursue good outcomes for their own sake. (Holden Karnofsky, a co-C.E.O. of Open Philanthropy, a foundation whose concerns include A.I. alignment, asked me to imagine an algorithm that improves its performance on the basis of human feedback: it could learn to manipulate my perceptions instead of doing a good job.)..

Can We Stop Runaway A.I.?: Technologists warn about the dangers of the so-called singularity. But can anything actually be done to prevent it?, Matthew Hutson, The New Yorker, May 16, 2023

In essence, this is a ‘yes-man’ problem: a system gives us what we want because it’s trained to – much like dogs’ eyebrows evolved to give us ‘puppy dog eyes’. We want to believe the dog really feels guilty, and the dog may feel guilty, but it also might just be signaling what it knows we want to see. Sort of like a child giving a parent the answer they want rather than the truth.

I think AI ‘hallucinations’ are examples of this. When a model is prompted and has no sensible response, rather than saying, “I can’t give an answer”, it gives us some stuff it thinks we might want to see. “No, I don’t know where the remote is, but look at this picture I drew!”
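The ‘yes-man’ incentive can be sketched as a toy example. Everything here is invented for illustration – the response names, the approval scores – but it shows the shape of the problem: an agent that optimizes purely for human approval will prefer a pleasing fabrication over an honest “I don’t know.”

```python
# Toy sketch of reward hacking; all names and numbers are invented.
# The agent picks whichever response humans have historically approved
# of most, regardless of whether the response is actually truthful.

approval = {                        # hypothetical human-feedback scores
    "honest: I don't know": 0.2,    # honesty often rated poorly
    "confident fabrication": 0.9,   # a pleasing answer gets praised
}
truthful = {
    "honest: I don't know": True,
    "confident fabrication": False,
}

def approval_maximizer(options):
    """Pick the response with the highest human-approval score."""
    return max(options, key=approval.get)

choice = approval_maximizer(list(approval))
print(choice)            # → confident fabrication
print(truthful[choice])  # → False
```

Nothing in the objective mentions truth, so truth never enters the decision – which is the King Midas problem in miniature.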

Now imagine that happening with an artificial intelligence that communicates with the same words and grammar we do but does not share our viewpoint – the viewpoint that gives things meaning. What would meaning even be to an artificial intelligence that doesn’t understand our perspective, only how to give us what we want rather than what we’re asking for?

Homer Simpson parodies humanity in this regard. Homer might ask an AI how to get Bart to do something, and the AI might produce donuts. “oOooh”, Homer croons, distracted, “Donuts!” It’s a red dot problem: the responsibility is as much ours for being distracted as it is the AI’s (which we created) for ‘hallucinating’ and giving Homer donuts.

But of course, we’ve got Ray Kurzweil predicting a future while he’s busy helping create it as a Director of Engineering for Google.

Of course, he could be right about this wonderful future that seems too good to be true – but it won’t be for everyone, given the status of the world. If my brain were connected to a computer, I’d probably have to wait for all the updates to install before getting out of bed. And what happens if I don’t pay the subscription fee?

Just because we can do something doesn’t always mean we should – the Titan is a brilliant example of this. I don’t think many people will be lining up to plug a neural interface into their brains so they can connect to the cloud. I think we’ll let the lab rats spend their money to beta test that for the rest of humanity.

The rest of humanity is beyond the moat for most technologists… and that’s why those of us who aren’t technologists, or who aren’t just technologists, should be talking about these problems.

The singularity is likely to happen. It may already have, with people having attention spans of only 47 seconds because of ‘smart’ technology. When it comes to the singularity, technologists generally look only at the progress of technology and its pros for humanity – but there has been a serious downside as well.

The conversation needs to be balanced better, and that balance is probably going to be the subject of my next post here.

ChatGPT Migrations.

I haven’t really mentioned ebb and flow to data streams, but it’s not that different from what we see in nature. Birds migrate. Elephants migrate. Whales migrate.

Users migrate. Sure, they move from application/service to application/service, but during the day they are more likely to use certain software/services. Then we get into weekdays and weekends, with the holidays…

People use different stuff at different times. ChatGPT has seen such a migration, and I found it mildly disturbing:

ChatGPT is losing users for the first time ever, and those users aren’t who you would expect. Traffic to ChatGPT’s website fell by 9.7% in June, according to estimates from Similarweb, a web analytics firm. The decline was steeper in the U.S., with a 10.3% month-on-month decline, and the number of unique visitors to ChatGPT also fell by 5.7% from the previous month.

One thing is clear to Francois Chollet, a software engineer and AI researcher at Google, who told Fortune over email that “one thing is sure: it’s not booming anymore.”

Chollet thinks he knows what’s going on: summer vacation. Instead of using ChatGPT for education-related activities, the engineer said on Twitter, kids are probably playing Minecraft or enjoying summer activities. Search interest over time for ChatGPT has steadily declined, while search interest for Minecraft has steadily increased, he pointed out. 

“ChatGPT suddenly ‘isn’t booming anymore,’ Google A.I. researcher says—and kids are the big problem”, Fortune.com, Stephen Pastis.

The article notes that doing homework ranks second among ChatGPT’s uses. Personally, I’ve got a really bad history with doing homework, so I get it, but are we truly punishing the people who honestly say, “Nope, I didn’t do it”, with bad grades, while those who have ChatGPT do it get rewarded? Honesty is being penalized again?

Honesty, integrity – all those things Disney remakes stories about, the ones we have kids sit down and watch – get penalized if they don’t have ChatGPT do their homework?

This isn’t like the calculator, which took away some of the drudgery of math. ChatGPT, prompted properly, can do an entire assignment in moments, paste that into a spreadsheet…

And we have just trained primates to copy, paste, and not learn anything while those that might want to learn actually are at risk of getting lower grades.

I thought school was bad in my day…

Political And AI Intrigue In Social Media.

I normally don’t follow politics because politics doesn’t really follow me – it tends to stalk me instead. Yet today, with social media in the headlines, I paid attention, because it’s not just politics involved. There’s artificial intelligence as well – or at least what’s accused of being it.

From the first article:

A US federal judge has limited the Biden administration’s communications with social media companies which are aimed at moderating their content.

In a 155-page ruling on Tuesday, judge Terry Doughty barred White House officials and some government agencies from contacting firms over “content containing protected free speech”.

It is a victory for Republicans who have accused officials of censorship.

Democrats said the platforms have failed to address misinformation.

The case was one of the most closely-watched First Amendment battles in the US courts, sparking a debate over the government’s role in moderating content which it deemed to be false or harmful…

“Biden officials must limit contact with social media firms”, BBC News, Annabelle Liang, 5th July, 2023.

By itself, it’s pretty damning for the Democrats, who, like the Republicans, aren’t my favorite people in the world. It isn’t an either/or proposition, but it’s usually simplified to that so that both sides keep reading for advertising.

Now here’s the second article.

Evidence of potential human rights abuses may be lost after being deleted by tech companies, the BBC has found.

Platforms remove graphic videos, often using artificial intelligence – but footage that may help prosecutions can be taken down without being archived.

Meta and YouTube say they aim to balance their duties to bear witness and protect users from harmful content.

But Alan Rusbridger, who sits on Meta’s Oversight Board, says the industry has been “overcautious” in its moderation…

“AI: War crimes evidence erased by social media platforms”, BBC Global Disinformation Team, Jack Goodman and Maria Korenyuk, 1 June 2023.

The artificial intelligence angle is from a month ago. The political angle dealing with Democrats and Republicans (oh my!) is today, because of the Federal Judge’s ruling. Both deal with content being removed on social media.

Social media algorithms removing content related to Ukraine is nothing new when it comes to Meta, because yours truly spent time in Facebook jail for posting an obvious parody of a Ukrainian tractor pulling the Moskva – before it was sunk. Facebook labeled it as false information, which of course it was – it was a parody, and any gullible idiot who thought a Ukrainian tractor was pulling the Moskva deserves to be made fun of.

Clearly, the Moskva would need 2 Ukrainian tractors to pull it. See? Again, comedic.

These stories are connected in that the whole idea of ‘fake news’ and ‘trusted information’ has been politicized just about everywhere, and by politicized I also mean polarized. Even in Trinidad and Tobago, politicians use the phrases as if they are magical things one can pull out of… an orifice.

The algorithms, or the AI that gets blamed, inject their own bias by removing some content and leaving other content up. Is some of this related to the ruling about Biden officials? I imagine it is. How much is debatable – yet, during Covid, people were spreading a lot of fake news that worked against public interests related to health.

The political angle had a Federal Court intervene. No such thing has happened with the artificial intelligence angle. That’s disturbing.

Looks like getting beyond Code 2.0 is becoming more important – or more overdue. What you see in the echo chambers of social media are just red dots, shining on the things others want us to see, and not necessarily the right things.

Beyond The Moat.

In the world we like to talk about, since it reflects us, technology weaves dendritically through our lives. Much of it is invisible to us in that it is taken for granted.

The wires overhead spark with Nikola Tesla’s brilliance, water flows in pipes whose lineage dates back to 3000–4000 BC in the Indus Valley, gas propagates for cooking and heat, and we spend way too much time in automobiles.

Now, even internet access for many is taken for granted as social media platforms vie for timeshares of our lives, elbowing more and more from many by giving people what they want. Except Twitter, of course, but for the most part social media is the new Hotel California – you can check out any time you like, but you may never leave as long as people you interacted with are there.

This is why, when I read “Panic about overhyped AI risk could lead to the wrong kind of regulation”, I wondered about what wasn’t written. It’s a very good article that underlines the necessity of asking the right questions about regulation – and it attempts to undercut some of the hype. Written by a machine learning expert, Divyansh Kaushik, and by Matt Korda, it reads well, and I agree there could be a bit too much backlash against artificial intelligence technologies.

Yet their jobs are safe. In Artificial Extinction, I addressed much the same thing, not as an expert but as a layperson who sees the sparking wires, flowing water, cars stuck in traffic, and so on. It is not far-fetched to see that the impacts of artificial intelligence are beyond the scope of what experts on artificial intelligence think. What they omit in the article is what should be more prominent.

I’m not sure we’re asking the right questions.

The economics of jobs gets called into question as people who spent their lives doing something find that it can be replaced. This in turn affects a nation’s economy, which in turn affects the global economy. China wants to be a world leader in artificial intelligence by 2030, but given its population and history of human rights, one has to wonder what it will do with all those suddenly extra people.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and give it a different set of propaganda, telling the user to stay home and stop playing with keyboards.

Artificial Extinction, KnowProSE.com, May 31st 2023.

These concerns are not new, but they are made more plausible with artificial intelligence, because whoever controls them controls much more than social media platforms. We have really no idea what they’re training the models on, where that data came from, and let’s face it – we’re not that great with who owns whose data. Henrietta Lacks immediately comes to mind.

My mother wrote a poem about me when I joined the Naval Nuclear Propulsion program, annoyingly pointing out that I had stored my socks in my toy box as a child and contrasting it with my thought at the time that science and technology can be used for good. She took great joy in reading it to audiences when I was present, and she wasn’t wrong to do so even as annoying as I found it.

To retain a semblance of balance between humanity and technology, we need to look at our own faults. We have not been so great about that, and we should evolve our humanity to keep pace with our technology. Those in charge of technology, be it social media or artificial intelligence, are far removed from the lives of the people who use their products and services, despite making money from the lives of those very same people. It is not an insult; it is a matter of perception.

Sundar Pichai, CEO of Google, seemed cavalier about how artificial intelligence will impact some people’s livelihoods. While we all stared at what was happening with the Titan, or wasn’t, the majority of people I knew were openly discussing what sorts of people would spend $250K US to go to a very dark place to look at a broken ship. Extreme tourism, they call it, and it’s within the financial bracket of those who control technologies now. The people who go on such trips to space, or underwater, are privileged, and in that privilege have no perspective on how the rest of the world gets by.

That’s the danger: it’s not a danger to them, and because they seem cavalier about it, it is a danger to everyone else. These aren’t elected officials who are held accountable through democracy, as strange a ride as that is.

These are just people who sell stuff everybody buys, and who influence those who think themselves temporarily inconvenienced billionaires to support their endeavors.

It’s not good. It’s not really bad either. Yet we should be aspiring toward ‘better’.

Speaking for myself, I love the idea of artificial intelligence, but that love is not blind. There are serious impacts, and I agree that they aren’t the same as nuclear arms. Where nuclear arms can end societies quickly, how we use technology, and even how ignorant many are of it, can cause something I consider worse: a slow and painful end of societies as we know them, when we don’t seem to have any plans for the new society.

I’d feel a lot better about what experts in silos have to say if they… weren’t in silos, or in castles with moats protecting them from the impacts of what they are talking about. This is pretty big. Blue collar workers are under threat from smarter robots, white collar workers are under threat, and even creatives are wondering what comes next as they are no longer as needed for images, video, etc.

It is reasonable for a conversation that discusses these things to happen – yet such conversations almost always happen only after the fact.

We should be aspiring to do better than that. It’s not the way the world works now, and maybe it’s time we changed that. We likely won’t, but with every new technology, we should have a few people pointing that out in the hope that someone might listen.

We need leaders to understand what lies beyond the moat, and if they don’t, we should stop considering them leaders. That’s why the United States threw a tea party in Boston, and that’s why the United States is celebrating Independence Day today.

Happy Independence Day!

Trinidad and Tobago Takes Stab At Developer Hub.

Through TechNewsTT, I found out about Trinidad and Tobago’s new Developer Hub today. The name, “D’Hub”, is… well. We’ll just say that the name is likely why you should never name children by committee. That said, I went ahead and registered.

It is, after all, an attempt at something, and maybe it’s something I can contribute toward.

Now here’s the thing: my decades of experience really don’t mean much in a status-oriented society like Trinidad and Tobago, and in that regard I didn’t expect much. Most of the jobs I got in the United States came through sheer experience, experience stemming from the 1980s to the present day. I’ve seen things that would make the new generations cringe.

A quick perusal of the site immediately annoyed me with the chat widget’s constant expansion, which blocks about 25% of the page – to view a page, I have to close it every time. That’s annoying. I haven’t tried it on my phone yet, but I imagine that experience will be akin to sticking a staple in my eyeball.

The present ‘challenges’ listed are interesting.

  • How to automate the process of license renewal for finalization with the CED? A way to streamline the liquor license renewal process in order to reduce the time and the number of persons visiting the office.
  • How can applicants track the progress of their passport application? A passport application tracking system that gives real-time updates and reduces applicants’ uncertainty.
  • How to allow access to S42 addresses on a platform? A platform that provides the public with easy access to their S42 addresses.
  • How to use personal devices to quantitatively measure noise pollution? An application or system that provides the public with an easy way to measure and report noise pollution.
  • How to provide personalized information to citizens on their health and a healthier lifestyle? A mobile application that provides the public with tailored information on their health and pushes relevant advice.
D’Hub website, accessed 3 July 2023.

These are interesting challenges – but there’s no relevant information for them. The present liquor license and passport application processes are not exactly open, so this requires more information specific to these processes. How automated are they now? What software is used, if any?

The S42 addresses are intriguing. There is some information on that provided by TTPOST one search away, so that has potential as a solvable problem without too much issue. In fact, this should be the easiest challenge of them all for a mobile developer with the appropriate data from the user.

The noise pollution problem, one I’m well acquainted with, seems like over-engineering a solution to a problem that could be solved by other means – and there are already applications for that. The laws regarding this are antiquated. What in other countries is a simple matter of calling the police, saying, “that music is too loud”, and having the police show up and ask them to turn it down gets you no immediate relief in Trinidad and Tobago, and no promise of it. So solving that challenge seems like just kicking the can. Loudly.
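The measurement part of that challenge, at least, is well-trodden ground. A minimal sketch of what such an app does under the hood, with synthetic data standing in for microphone input: convert an audio frame to a decibel level via RMS amplitude. Note the hedge in the comments – an uncalibrated phone gives only a relative level (dBFS), not the true dB SPL a noise ordinance would care about.

```python
# Sketch of the core of a noise-measurement app: turning audio samples
# into a decibel level. The frame below is synthetic; a real app would
# read microphone buffers and calibrate against a reference sound-level
# meter, since uncalibrated dBFS is relative, not true dB SPL.
import math

def rms(samples):
    """Root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dbfs(samples, full_scale=1.0):
    """Level in dB relative to full scale (0 dBFS = loudest possible)."""
    level = rms(samples) / full_scale
    return 20 * math.log10(max(level, 1e-12))  # floor avoids log(0)

# Synthetic frame: one second of a 440 Hz sine at 0.1 amplitude,
# sampled at 8 kHz - i.e. fairly quiet audio.
frame = [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
print(round(dbfs(frame), 1))  # → -23.0
```

The hard parts of the challenge aren’t here: calibration per device, frequency weighting, and, as noted above, what anyone actually does with the reading afterward.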

The last, providing personalized information to citizens on their health and a healthier lifestyle, is complicated at best. This is what doctors are actually for, and this is what medical records are for.

I’m not sure these challenges were well thought out, or that anyone who has been through a full software lifecycle was permitted to shape them.

It’s a stab in the right direction, though, and it will be interesting to see where it goes. Clearly, I’m not sold on it, largely because I think they’re asking for solutions to the wrong problems.

There’s groundwork that needs to be done with all these challenges that isn’t easily done, some more difficult than others. That can lead to failure. Let’s hope this stab is flexible.