A Chat With ChatGPT On AI.

In my youth, I fiddled with ELIZA. For decades I read books on AI and thought deeply about the subject. In later decades, I found myself playing with neural networks at home even while I did the drudgery of coding and debugging systems that made the term ‘software archaeology’ very real to me. I’m no expert on this. I am, however, a well-read layperson.
I was writing an article about this here, and I had enough false starts that I decided to try a new approach. There are just so many ways to discuss artificial intelligence, so many perspectives, and of course the din of voices shouting their thoughts on the Internet can be distracting.

It seems that every five minutes someone is posting something new about artificial intelligence, but it’s generally not new. They’re posting the same things about artificial intelligence in different ways, in the hope that some of it will be popular. It’s the way of the Internet. It’s the way of media. It is, with apologies to The Mandalorian, “The Way”.

I decided to have a chat with ChatGPT, and without further ado, here are the results. My prompts are in bold, the responses in italics.

I was wondering what you thought about artificial intelligence and colonialism.

The relationship between artificial intelligence (AI) and colonialism is a complex and multi-faceted one. On one hand, AI has the potential to exacerbate existing colonial power dynamics by reinforcing existing biases and inequalities. For example, if AI systems are trained on data sets that are biased against certain groups or cultures, they may perpetuate these biases in their decision-making processes. Similarly, if AI is used to automate tasks that have traditionally been performed by workers in developing countries, it could exacerbate economic inequalities and deepen the divide between the global North and South.

On the other hand, AI also has the potential to challenge colonial power structures by enabling greater access to information and knowledge. For example, AI-powered translation technologies can help to break down language barriers and facilitate cross-cultural communication. Similarly, AI can be used to analyze and interpret historical data in new ways, helping to challenge dominant narratives and promote a more nuanced understanding of the past. Ultimately, the relationship between AI and colonialism is shaped by a complex interplay of factors, including the specific ways in which AI is developed, deployed, and regulated, as well as the broader social, economic, and political contexts in which it operates. It is important for researchers, policymakers, and practitioners to be mindful of these dynamics and work to ensure that AI is developed and deployed in ways that promote social justice, equity, and respect for human rights.

As you can see, I leapt right into the deep end, and the response closely reflected my own thoughts on what the answer should be, which feeds my own cognitive bias without revealing anything new to me. Understanding our own cognitive biases when dealing with ChatGPT and others like it seems important after this first ‘session’, so with every response I’ll ask you, gentle reader, to consider your own cognitive biases as you read.

Should Artificial Intelligences be regulated, and if so, by whom?

Yes, I believe that artificial intelligence (AI) should be regulated to ensure that it is developed and deployed in ways that are safe, ethical, and aligned with societal values. Regulation is necessary because AI systems have the potential to cause harm, whether through unintentional errors, biases, or malicious use. In addition, the rapid pace of AI development means that existing legal and regulatory frameworks may not be sufficient to address the unique challenges posed by these technologies.

The question of who should regulate AI is a complex one that involves a range of stakeholders, including government bodies, industry organizations, and civil society groups. Ultimately, the responsibility for regulating AI should be shared across multiple actors, each with their own strengths and expertise. Governments play a critical role in setting legal and ethical standards for AI development and deployment, as well as in enforcing these standards through regulation and oversight. Industry organizations can contribute by developing best practices and standards for AI development, sharing knowledge and expertise, and promoting transparency and accountability in AI systems. Civil society groups can provide critical oversight and advocacy, helping to ensure that AI systems are aligned with societal values and are being used in the public interest.

Effective regulation of AI will require ongoing collaboration and dialogue among these different stakeholders, as well as ongoing monitoring and adaptation in response to new developments and challenges. It will also require a commitment to transparency, accountability, and public engagement, so that the benefits and risks of AI are fully understood and can be balanced appropriately.

This also went with my cognitive bias, though with one exception: I personally tend to be wary of government regulation of many things, because it’s hard to find something that governments around the world haven’t consistently screwed up somehow. Those big wheels can get messy, and much depends on who is choosing the direction.

If we delve into its answer further, there’s a lot of ambiguity hidden under familiar words. “Safe, ethical, and aligned with societal values” sounds very good until you start deciding what is safe, what is ethical, and which societal values to align with. In its own way, the answer puts the responsibility on the people who will be impacted by artificial intelligences in socioeconomic ways. Consider, in any system accused of democracy (we’ll get to that): if it works as it’s supposed to, and people are being put on the breadline because of artificial intelligences rather than because of people who cross lines on maps… what do you think the democratic response will be? I didn’t take it in that direction, though, because I was meeting a new bot and playing with it.

Instead, I explored a new direction.

What do you think of the quotation by Henry David Thoreau, where ‘men have become the tools of their tools’?

The quotation by Henry David Thoreau, where “men have become the tools of their tools,” speaks to the idea that technology can shape our behavior and way of life, sometimes in ways that we do not fully understand or intend. This idea reflects a broader concern that our relationship with technology is not always one of control and agency, but can be characterized by a sense of dependency and even subservience.

Thoreau’s quotation raises important questions about the role of technology in our lives, and the extent to which we are able to shape and control its impact. In some cases, our use of technology may reflect our values and priorities, and enable us to achieve our goals in more efficient and effective ways. However, in other cases, our use of technology may be driven by external forces, such as corporate interests or social pressures, and may have unintended consequences that we do not fully appreciate.

To avoid becoming “tools of our tools,” it is important to approach technology with a critical and reflective mindset, and to be mindful of its potential impact on our lives and society as a whole. This requires a willingness to question our assumptions and beliefs about technology, and to engage in ongoing dialogue and debate about its role in shaping our world. It also requires a commitment to developing ethical frameworks and policies that guide the development and use of technology, and that ensure that it serves the needs and aspirations of all members of society.

It has drawn upon more than just facts here, and without the keywords ‘artificial intelligence’ or ‘AI’, it gives a broader result, which should be expected, and I did expect it. The analysis smacks of a lot of things written about Thoreau’s quote, which I imagine are within the data it has drawn upon. In a world where people worry about ChatGPT doing our homework, we are ourselves its homework. How do I come to that conclusion? This was a response where it used “our” inclusively. Go look again.

Am I impressed? Of course, but not in the ways other people are shouting about from the rooftops. I understand, at least in broad strokes, how it works. It’s not sentient. But it draws upon the information we give it, it pulses with the money it’s funded with, and it gives us a reflection of what we have to say.

Predictably.

And there’s more to come.
