Critical Thinking In The Age Of AI.

Critical thinking is the ability to suspend judgement and to consider evidence, observations and perspectives in order to form a judgement; it requires rational, skeptical and unbiased analysis and evaluation.

It can be difficult, particularly being unbiased, rational and skeptical in a world that seems to demand responses from us increasingly quickly.

Joe Árvai, a psychologist who has done research on decision making, recently wrote an article about critical thinking and artificial intelligence.

“…my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

It’s a good article, well worth the read, and it’s in the vein of what I have been writing recently about ant mills and social media. Web 2.0 was built on commerce, which was built on marketing. Good marketing is persuasion (a product or service is good for the consumer); bad marketing is manipulation (the product or service is not good for the consumer). It’s hard to tell the difference between the two.

Inputs and Outputs.

We don’t know exactly how much of Web 2.0 was shoveled into the engines of generative AI learning models, but we do know that chatbots and generative AI are now considered more persuasive than humans. In fact, ChatGPT 4 is presently considered 82% more persuasive than humans, as I mentioned in my first AI roundup.

This should be at least a little disturbing, particularly when there are already sites telling people how to get GPT4 to create more persuasive content, such as this one. Yet the key difference between persuasion and manipulation is whether it’s good for the consumer of the information or not – a key problem with fake news.

Worse, we have all seen products and services that had brilliant marketing but were not good products or services. If you have a bunch of stuff sitting and collecting dust, you fell victim to marketing, and arguably, manipulation rather than persuasion.

It’s not difficult to see that the marketing of AI itself could be persuasive or manipulative. If you had a tool that could persuade people they need the tool, wouldn’t you use it? Of course you would. Do they need it? Ultimately, that’s up to the consumers, but if they in turn are generating AI content that feeds the learning models – what is known as synthetic data – it creates its own problems.

Critical Thought

Before generative AI became mainstream, we saw issues with people sharing fake news stories because they had catchy headlines and fed a confirmation bias. A bit of critical thought applied could have avoided much of that, but it remained a problem. From Web 2.0 to the present, it has always been about getting eyes on content quickly to increase advertising impressions, and some people were more ethical about that than others.

Most people don’t really understand their own biases, but social media companies implicitly do – we tell them with our every click, our every scroll.

This is compounded by the scientific evidence that attention spans are shrinking: research now puts the average attention span at 47 seconds. That’s not a lot of time for critical thinking before liking or sharing something.

While there’s no direct measure of how much critical thought is being applied, the diminished average attention span is a solid indicator that, on average, people are using less of it.

“…Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of their communities more than they might value their own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser…”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

In an age of generative artificial intelligence that is here to stay, it’s paramount that we understand ourselves better as individuals and collectively so that we can make thoughtful decisions.

Surprise: Virtual Isn’t Actual.

Anyone who has had a passing relationship with a dictionary may notice some sarcasm in the title. Virtual, by definition, isn’t actual.

Of course, someone has to go about proving that, and that has value. There’s a semantic tangle in whether an artificial relationship is real or not, since ‘artificial’ itself by definition means made by humans. It’s easy to go down a path of thought where all relationships are artificial since they are made by humans, but that’s not really what we’re talking about at all.

We’re talking about human society, psychology, and the impact of relationships with artificial intelligences.

“Early on, [Silicon Valley companies] discovered a good formula to keep people at their screens,” said Turkle. “It was to make users angry and then keep them with their own kind. That’s how you keep people at their screens, because when people are siloed, they can be stirred up into being even angrier at those with whom they disagree. Predictably, this formula undermines the conversational attitudes that nurture democracy, above all, tolerant listening.

“It’s easy to lose listening skills, especially listening to people who don’t share your opinions. Democracy works best if you can talk across differences by slowing down to hear someone else’s point of view. We need these skills to reclaim our communities, our democracies, and our shared common purpose.”

“Why virtual isn’t actual, especially when it comes to friends”, Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society, quoted by Liz Mineo, The Harvard Gazette, December 5, 2023.

If that sounds familiar, it’s a recurring theme. Just last week in AI, Ethics and Us, I pointed to what Miguel Ángel Pérez Álvarez had written on the Spanish version of Wired in “IA: implicaciones éticas más allá de una herramienta tecnológica” which was in the same vein.

Turkle, giving a keynote, had more space to connect the dots, and so pointed out that the algorithms Silicon Valley companies use are useful for keeping all of us attached to our screens. I do think that’s a bit unfair, since it’s technology companies broadly; while there’s a concentration in Silicon Valley, companies around the world are leveraging these algorithms all the time. And as more and more people are noting, it has broader impacts than what we do as individuals.

In fact, if you look at social networks like Facebook and whatever Musk decided to call Twitter next, you’ll find people in algorithmic caves, almost unable to tunnel their way out because they’re quite happy in that algorithmic cave. Within that little cave there is an echo chamber.

An actual echo chamber created by virtual means.

The Technological Singularity: A Roundup of Perspectives Outside Tech.

Yesterday I wrote about the technological singularity as espoused by positive singularitarians who are sharing their perspectives on such a singularity – and rebutted some of the problems with the laser pointer that they want us to focus on. Fairly or unfairly, they quote Ray Kurzweil a lot.

Generally speaking, they are in the artificial intelligence business and therefore they want to sell us a future as they have done in the past, much like the paperless office as I mentioned here.

There’s more to humanity than that, I would like to think, so I’d been reading up and considering other aspects of humanity that may have some things to say that are of weight in the context of the hypothetical technological singularity. I write ‘hypothetical‘ because any prediction is hypothetical, even when you’re tilting with marketing to assure it happens.

Yesterday, I got a little sidetracked with the issue of global economic disparity versus global poverty, which I’ve resolved not to solve because I don’t think it is meant to be solved – or an economist would have already done so.

However, I found much that is being said outside the realms of the more pure technologists.

…The time for international political action has therefore arrived. Both AI-producer and non-AI-producer countries must come together to create an international organism of technological oversight, along with an international treaty in artificial intelligence setting forth basic ethical principles.   

The greatest risk of all is that humans might realize that AI singularity has taken place only when machines remove from their learning adaptations the flaw of their original design limiting their intelligence: human input. After all, AI singularity will be irreversible once machines realize what humans often forget: to err is human. 

“Entering the singularity: Has AI reached the point of no return?”, J. Mauricio Gaona, Opinion Contributor, The Hill (Technology, Opinion), May 15, 2023

That is, of course, a major issue. Garbage in, garbage out. If you want fewer errors, every software engineer of worth knows that you want to minimize the capacity of the user to create more errors. That’s a logical thing to point out.
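As an aside, that “minimize the capacity of the user to create errors” principle usually shows up in code as validation at the boundary – rejecting bad input before it can pollute anything downstream. Here is a minimal, hypothetical sketch (the `parse_age` function and its range are my own illustration, not anything from the cited article):

```python
def parse_age(raw: str) -> int:
    """Constrain free-form user input to a sane value instead of trusting it.

    Rejecting garbage at the boundary keeps it from propagating downstream -
    the 'garbage in, garbage out' problem in miniature.
    """
    try:
        age = int(raw.strip())
    except ValueError:
        raise ValueError(f"age must be a whole number, got {raw!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"age must be between 0 and 130, got {age}")
    return age
```

The same idea scales up: the less freedom an input channel gives the user to supply malformed data, the less cleanup every later stage has to do – which is exactly the worry with human input feeding AI learning models.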

Psychology Today had an impressively balanced article, well worth the read.

“…What does worry me is a “second singularity.”

The second singularity is not just about computers getting more powerful, which they are, but the simultaneous reduction in expertise that we are seeing in many quarters. As organizations outsource decision making authority to machines, workers will have fewer opportunities to get smarter, which just encourages more outsourcing.

The second singularity is actually much closer to us in time than Kurzweil’s original notion of a singularity. It is a second singularity in deference to Kurzweil’s analysis, rather than for chronological reasons…”

“The Second Singularity: Human expertise and the rise of AI”, Gary Klein, Ph.D., Psychology Today, December 3, 2019.

Given that the article is three and a half years old, it’s impressively descriptive and poignant for the conversation today, delving into nuanced points about expertise – some things are worth losing, some not. More people should read that article; it’s a fairly short read and well written, including suggestions on what we should do even now. It has definitely aged well.

Moving on, we get to an aspect of the economic perspective. An article on Forbes has some interesting questions, condensed below.

How will the potential of bioengineering capabilities re-define and re-design the way we produce raw materials?

How will the emerging potential of molecular manufacturing and self-replicating systems reverse the very process of globalization, as nations who own and control this technology will not need other nations as they can produce/manufacture anything they need or want in unlimited quantities?

How will blockchain based additive manufacturing create a participatory economy blurring the boundaries of national geography? How will a nation’s economy be influenced by digital manufacturing designs from anywhere and anyone?

How will nations deal with the likely collapse of the economic system in the coming years? Are they prepared?

“The End Of Work: The Consequences Of An Economic Singularity”, Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, February 17, 2019

This is another article that has aged well at over four years old, because those questions are still to be answered. Interestingly, the author also mentions Risk Group LLC, where she is the CEO. The article lists her as a former contributor, and her author page on Forbes describes her as, “working passionately to define a new security centric operating system for humanity. Her efforts towards building a strategic security risk intelligence platform are to equip the global strategic security community with the tools and culture to collectively imagine the strategic security risks to our future and to define and design a new security centric operating system for the future of humanity.”

Definitely an interesting person, and in 2019 it seems she was well aware of the challenges.

“…The shape the future of humanity takes will be the result of complex, changing, challenging and competing for technological, political, social and economic forces. While some of these forces are known, there is a lot that is still not known and the speed at which the unknowns can unfold is difficult to predict. But unless we make a strong effort to make the unknowns, known, the outcome of this emerging battle between technological singularity and economic singularity seems to be just the beginning of social unrest and turmoil…”

“The End Of Work: The Consequences Of An Economic Singularity”, Jayshree Pandya (née Bhatt), Ph.D., Forbes > Innovation > AI, February 17, 2019

It’s a shame Forbes paywalls their content, or more of us might have read it when it was written. This sort of article definitely needed a wider audience in 2019, I think.

Just a glance at Risk Group LLC’s work makes it look like they have been busily working on these things. I’ll be looking their stuff over for the next few days, I expect.

At an interesting intersection of education and sociology, I came across an article that quotes Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania:

“The nature of jobs just changed fundamentally. The nature of how we educate, the nature of how teachers and students relate to work, all that has just changed too. Even if there’s no advancement in AI after today, that’s already happened,” said Mollick, an economic sociologist who studies and teaches innovation and entrepreneurship at Wharton.

“We are seeing, in controlled studies, improvements in performance for people doing job tasks with AI of between 20% and 80%. We’ve never seen numbers like that. The steam engine was 25%.”

“Has new AI catapulted past singularity into unpredictability?”, Karen McGregor, University World News, April 27, 2023.

Things have been changing rapidly indeed. The PC Revolution was relatively slow, the Internet sped things up, and then mobile devices took things to a higher level. The comparison to the steam engine is pretty interesting.

Lastly, I’ll leave you with an anthropological paper that I found. It’s a lengthy read, so I’ll just put the abstract below and let you follow the link. It gets into collective consciousness.

The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.

O’Lemmon, M. (2020). The Technological Singularity as the Emergence of a Collective Consciousness: An Anthropological Perspective. Bulletin of Science, Technology & Society, 40(1–2), 15–27. https://doi.org/10.1177/0270467620981000

That’s from 2020. Thus, most of the things I’ve found have been related to present issues yet were written some time ago, hidden in the silos of specialties beyond that of just technology.

There’s definitely a lot of food for thought out there when you cast a wider net beyond technologists.

It might be nice to get a better roundup, but I do have other writing I’m supposed to be working on.

The Need For A Vacation.

Sunrise, Batteaux Bay, Tobago

I took some time off – got out of the new home, got away from the old problems and the old thoughts. There were times when I took some time for myself, and those who know me well will say that it’s actually rare for me not to be alone somewhere, but it’s not quite the same.

There’s a need to be elsewhere, physically, in a completely different environment. Over the decades, I count two vacations where I was able to do that, and this was the second one.

That should strike people as peculiar – I mean, software engineers used to make decent money, and a few still do, but over the years it hasn’t always been a matter of having the money as much as the time. It’s also a matter in the United States that has people writing articles such as “Why America has Become The No Vacation Nation”.

There have been life changes for me recently with work and living that have allowed me some time to reflect on ways forward – something I worked hard and long for. I did just that, disappearing and unplugging for the most part, away from almost everyone. For 10 days I was ‘offline’. This gave me time to think about things, something that I’ll write more about on RealityFragments.com.

The point here is that I had no idea how necessary it was until I was away and elsewhere, apart more than usual, and able to process a lot of things that I had not been able to before.

Over the course of our lives, and the smaller subset of our lives that we call careers, we start on many different paths and sometimes stay on them even when they are no longer necessary. We might do things in certain ways because of old plans, or old circumstances – abandoned, or gone. And while we are doing those things, we completely miss the things that might be hitting us over the head in our desperate clawing toward a future that a younger version of ourselves once wanted, once needed…

The pressures of life, through our circumstance or even those we create for ourselves, have the capacity to overwhelm us and work against us.

A few days won’t do. Long weekends are meaningless. Over-scheduled insanity is just work in a different guise, that’s not a vacation.

Nature reclaims things.

We all need time and space for a real reflection, and if someone asked me what I regret in my life, it would be that I have been poor about giving myself that time.

Time where I could take my time and plan the picture above. Time to tie a string to a waterproof camera and just throw it in the ocean off a dock for an entire morning. Time to walk around and be surprised by what drops in your lap.

Time.