Surprise: Virtual Isn’t Actual.

Anyone who has had a passing relationship with a dictionary may notice some sarcasm in the title. Virtual, by definition, isn’t actual.

Of course, someone has to go about proving that, and that has value, particularly in the semantics of whether an artificial relationship is real or not, since ‘artificial’ by definition means made by humans. It’s easy to go down a path of thought where all relationships are artificial since they are made by humans, but that’s not really what we’re talking about at all.

We’re talking about human society, psychology, and the impact of relationships with artificial intelligences.

“Early on, [Silicon Valley companies] discovered a good formula to keep people at their screens,” said Turkle. “It was to make users angry and then keep them with their own kind. That’s how you keep people at their screens, because when people are siloed, they can be stirred up into being even angrier at those with whom they disagree. Predictably, this formula undermines the conversational attitudes that nurture democracy, above all, tolerant listening.

“It’s easy to lose listening skills, especially listening to people who don’t share your opinions. Democracy works best if you can talk across differences by slowing down to hear someone else’s point of view. We need these skills to reclaim our communities, our democracies, and our shared common purpose.”

“Why virtual isn’t actual, especially when it comes to friends,” Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society, quoted by Liz Mineo, The Harvard Gazette, December 5, 2023.

If that sounds familiar, it’s a recurring theme. Just last week in AI, Ethics and Us, I pointed to what Miguel Ángel Pérez Álvarez had written on the Spanish version of Wired in “IA: implicaciones éticas más allá de una herramienta tecnológica” which was in the same vein.

Turkle, giving a keynote, had more space to connect the dots, pointing out that the algorithms Silicon Valley companies use help keep all of us attached to our screens – though I do think singling out Silicon Valley is a bit unfair: it’s technology companies generally, and while there’s a concentration in Silicon Valley, companies around the world are leveraging these algorithms all the time. And as more and more people are noting, this has broader impacts than what we do as individuals.

In fact, if you look at social networks like Facebook and whatever Musk decided to call Twitter next, you’ll find people in algorithmic caves, almost unable to tunnel their way out because they’re quite happy in that algorithmic cave. Within that little cave there is an echo chamber.

An actual echo chamber created by virtual means.

The Technological Singularity: A Roundup of Perspectives Outside Tech.

Yesterday I wrote about the technological singularity as espoused by positive singularitarians who are sharing their perspectives on such a singularity – and rebutted some of the problems with the laser pointer that they want us to focus on. Fairly or unfairly, they quote Ray Kurzweil a lot.

Generally speaking, they are in the artificial intelligence business and therefore they want to sell us a future as they have done in the past, much like the paperless office as I mentioned here.

There’s more to humanity than that, I would like to think, so I’d been reading up and considering other aspects of humanity that may have some things to say that are of weight in the context of the hypothetical technological singularity. I write ‘hypothetical’ because any prediction is hypothetical, even when you’re tilting with marketing to ensure it happens.

Yesterday, I got a little sidetracked with the issue of global economic disparity versus global poverty, which I’ve resolved not to solve because I don’t think it is meant to be solved, or an economist would have already solved it.

However, I found much being said outside the realms of pure technologists.

…The time for international political action has therefore arrived. Both AI-producer and non-AI-producer countries must come together to create an international organism of technological oversight, along with an international treaty in artificial intelligence setting forth basic ethical principles.   

The greatest risk of all is that humans might realize that AI singularity has taken place only when machines remove from their learning adaptations the flaw of their original design limiting their intelligence: human input. After all, AI singularity will be irreversible once machines realize what humans often forget: to err is human. 

“Entering the singularity: Has AI reached the point of no return?”, J. Mauricio Gaona, Opinion Contributor, The Hill (Technology, Opinion), May 15, 2023.

That is, of course, a major issue. Garbage in, garbage out. If you want fewer errors, every software engineer worth their salt knows that you want to minimize the user’s capacity to create errors. That’s a logical thing to point out.
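The principle is the familiar one of constraining input at the boundary rather than trusting free-form text. A minimal sketch in Python of what that looks like in practice (the function and the allowed values here are illustrative, not drawn from any of the articles quoted):

```python
# Minimize the user's capacity to create errors: accept only a known-good
# set of values, normalize harmless variation, and reject actual garbage.

VALID_UNITS = {"celsius", "fahrenheit"}

def parse_unit(raw: str) -> str:
    """Normalize user input and reject anything outside the allowed set."""
    unit = raw.strip().lower()
    if unit not in VALID_UNITS:
        raise ValueError(
            f"unit must be one of {sorted(VALID_UNITS)}, got {raw!r}"
        )
    return unit

print(parse_unit("  Celsius "))  # prints: celsius
```

Being tolerant of whitespace and capitalization while refusing anything outside the allowed set is the same trade-off Gaona’s point gestures at: the fewer ways a human can feed a system something wrong, the fewer errors the system can learn from or amplify.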

Psychology Today had an impressively balanced article, well worth the read.

“…What does worry me is a ‘second singularity.’

The second singularity is not just about computers getting more powerful, which they are, but the simultaneous reduction in expertise that we are seeing in many quarters. As organizations outsource decision making authority to machines, workers will have fewer opportunities to get smarter, which just encourages more outsourcing.

The second singularity is actually much closer to us in time than Kurzweil’s original notion of a singularity. It is a second singularity in deference to Kurzweil’s analysis, rather than for chronological reasons…”

“The Second Singularity: Human expertise and the rise of AI,” Gary Klein, Ph.D., Psychology Today, December 3, 2019.

Given that the article is over four years old, it’s impressively descriptive and poignant for the conversation today, delving into nuanced points about expertise – some things are worth losing, some not. More people should read that article; it’s a fairly short read and well written, including suggestions on what we should do even now. It has definitely aged well.

Moving on, we get to an aspect of the economic perspective. An article on Forbes has some interesting questions, condensed below.

How will the potential of bioengineering capabilities re-define and re-design the way we produce raw materials?

How will the emerging potential of molecular manufacturing and self-replicating systems reverse the very process of globalization, as nations who own and control this technology will not need other nations as they can produce/manufacture anything they need or want in unlimited quantities?

How will blockchain based additive manufacturing create a participatory economy blurring the boundaries of national geography? How will a nation’s economy be influenced by digital manufacturing designs from anywhere and anyone?

How will nations deal with the likely collapse of the economic system in the coming years? Are they prepared?

“The End Of Work: The Consequences Of An Economic Singularity,” Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, February 17, 2019.

Another article that has aged well at over four years old, because those questions remain unanswered. Interestingly, the author also mentions Risk Group LLC, where she is the CEO. The article lists her as a former contributor, and her author page on Forbes describes her as, “working passionately to define a new security centric operating system for humanity. Her efforts towards building a strategic security risk intelligence platform are to equip the global strategic security community with the tools and culture to collectively imagine the strategic security risks to our future and to define and design a new security centric operating system for the future of humanity.”

Definitely an interesting person, and in 2019 it seems she was well aware of the challenges.

“…The shape the future of humanity takes will be the result of complex, changing, challenging and competing for technological, political, social and economic forces. While some of these forces are known, there is a lot that is still not known and the speed at which the unknowns can unfold is difficult to predict. But unless we make a strong effort to make the unknowns, known, the outcome of this emerging battle between technological singularity and economic singularity seems to be just the beginning of social unrest and turmoil…”

“The End Of Work: The Consequences Of An Economic Singularity,” Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, February 17, 2019.

It’s a shame Forbes paywalls their content, or more of us might have read it when it was written. This sort of article definitely needed a wider audience in 2019, I think.

Just a glance at Risk Group LLC’s work makes it look like they have been busily working on these things. I’ll be looking their material over for the next few days, I expect.

In an interesting context of education and sociology, I came across an article that quotes Ethan Mollick, associate professor at the Wharton School at the University of Pennsylvania:

“The nature of jobs just changed fundamentally. The nature of how we educate, the nature of how teachers and students relate to work, all that has just changed too. Even if there’s no advancement in AI after today, that’s already happened,” said Mollick, an economic sociologist who studies and teaches innovation and entrepreneurship at Wharton.

“We are seeing, in controlled studies, improvements in performance for people doing job tasks with AI of between 20% and 80%. We’ve never seen numbers like that. The steam engine was 25%.”

“Has new AI catapulted past singularity into unpredictability?”, Karen McGregor, University World News, 27 April 2023.

Things have been changing rapidly indeed. The PC revolution was relatively slow, the Internet sped things up, and then mobile devices took things to a higher level. The comparison to the steam engine is pretty interesting.

Lastly, I’ll leave you with an anthropological paper that I found. It’s a lengthy read, so I’ll just put the abstract below and let you follow the link. It gets into collective consciousness.

The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.

O’Lemmon, M. (2020). The Technological Singularity as the Emergence of a Collective Consciousness: An Anthropological Perspective. Bulletin of Science, Technology & Society, 40(1–2), 15–27. https://doi.org/10.1177/0270467620981000

That’s from 2020. Most of the things I’ve found relate to present issues yet were written some time ago, hidden in the silos of specialties beyond technology.

There’s definitely a lot of food for thought out there when you cast a wider net beyond technologists.

It might be nice to get a better roundup, but I do have other writing I’m supposed to be working on.