Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, like CNBC’s “A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn”.

Different versions of that article written by different organizations are all over right now, but it derives from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I’d offer that this can make it more dangerous.

In the context of what I’ve been writing about, we’re looking at what we feed our heads with. We’re looking at social media being gamed to cause confusion. These are dangerous things. Garbage in, Garbage out doesn’t just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan on how to employ those displaced. Do people want jobs? I think that’s the wrong question that we got stuck with in the old paint on society’s canvas. The more appropriate question is, “How will people survive?”, and that’s a question that we overlook because of the assumption that if people want to survive, they will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not; corporations like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with “excerpts” from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don’t exist.

“Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’”, Jon Brodkin, Ars Technica, May 30th, 2023.

It’s a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction written by those who control the large language models and the general artificial intelligences that will come later.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and serve her a different set of propaganda, telling her to stay home and stop playing with keyboards.

It’s not hard to imagine all of this. It is a big deal, but in parts of the world like Trinidad and Tobago, you don’t see much about it because there’s no real artificial intelligence here, even as local newspaper headlines suggest real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around to discuss it. Odd how that didn’t come up in a Google search of “AI Trinidad and Tobago”.

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn’t as tangible as nuclear war or pandemics to people. It’s how it will change our way(s) of life, how we’ll provide for families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots the media doesn’t show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world, though it gets lost in the shuffle of social media algorithms if it isn’t organically popular.

This is going to change societies around the globe. It’s going to change global society, where the access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We’ve propped our civilizations up with all manner of things that have not withstood previous changes in technology, and this is a definite leap beyond that.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don’t know that we have any advice. That means they won’t be prepared.

But then, neither were we, really.

Artifice Girl

With all that’s being marketed as artificial intelligence out there, this could be an interesting movie for at least some people who might like to see a merging of technology and humanity.

If you’re not appreciative of movies driven entirely by dialog, this is not your movie. There’s a bit of suspension of disbelief required too that may not sit well with some people, but it is a movie, and like most things out of Hollywood, it’s pretty easy to find flaws when compared with the real world.

Still. The idea of using a chatbot to catch pedophiles is not bad. It’s also not new.

If you’ve never heard of Negobot, or the Lolita Chatbot, it became public in 2013, about a decade before ‘Artifice Girl’, and I would be surprised if some of the dialog wasn’t borrowed from its Wikipedia page.

Even so, it was a pretty good movie. Topical in how we are responsible for what we create, topical in how imperfect we are as a species, and topical about how we ourselves are reflected in our technology, like so many bits of broken glass on the floor sometimes.

Overall, I think it could be a fairly important movie at this time since everyone is agog over large language models such as ChatGPT.

See below for the trailer.

Education and AI

Having coffee with a friend yesterday, I was listening to her talk about meetings with teachers and encouraging her daughter to read at least 30 minutes a day. This is likely mundane conversation for many people, but the minutiae of it is interesting to me.

It is, after all, a system.

Everyone I have met or interacted with that was an educator (or librarian, they don’t get enough love) has been passionate about learning. Educational administrators I know have been passionate about test results. Test results and learning are mutually exclusive.

The tests are to prove to the masses that the educational system is working. They rarely demonstrate what one knows about anything, only that which was tested in tests that students are trained for. Learning, to me, is about not just having knowledge but applying that knowledge in practical ways. Deep understanding is not found through testing, it’s found through interaction, at least in my experience.

Sure, you can memorize equations and plug numbers into them, but unless you know more about the equations themselves, you’re just doing something a computer can do.
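To make that concrete, here’s a minimal Python sketch of exactly that kind of mechanical plug-and-chug: evaluating the quadratic formula with no notion of what the equation means. The function is my own illustration, not from any particular curriculum.

```python
import math

# Mechanically "plugging numbers into an equation": the quadratic formula
# for ax^2 + bx + c = 0. The computer evaluates it perfectly without any
# understanding of what a root is.
def quadratic_roots(a: float, b: float, c: float) -> tuple[float, float]:
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        raise ValueError("no real roots")
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0 has roots 2 and 1
```

Knowing why the discriminant tells you whether real roots exist is the part the computer doesn’t have, and the part an education is supposed to supply.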

So let’s see what computers can do. I asked 3 different large language models, “Has the quality of education declined?”. Here are 3 different responses:

The question of whether the quality of education has declined is complex and can vary across different contexts and regions. It’s challenging to provide a definitive answer as education systems differ significantly worldwide, and numerous factors can influence quality.

While it’s true that some individuals and experts express concerns about declining educational quality, it is essential to consider various factors that impact education. These factors include changes in curriculum, teaching methods, funding, technological advancements, and societal shifts.

To assess the quality of education, it is necessary to examine specific indicators such as student performance on standardized tests, graduation rates, college enrollment rates, and feedback from educators, parents, and students. These indicators can provide insights into the overall effectiveness of education systems.

However, it is worth noting that generalizing the quality of education across all regions or institutions is not accurate, as there can be significant variations within and between countries. It is essential to consider local and contextual factors when evaluating educational quality.


As an AI language model, I cannot have an opinion. However, reports suggest that the quality of education varies by location and institution, with some improving and others declining. Factors like funding, teaching methodology, and teacher quality can also impact the quality of education. Additionally, the increasing demands of technology and the workforce may require updates to curriculum and teaching methods to effectively prepare students for the future.

DeepAI Chat.

There is no definitive answer to this question, as the quality of education can vary greatly depending on the country, school, and individual student. However, there are some general trends that suggest that the quality of education may have declined in recent years.

One trend is that students are spending less time on academic subjects and more time on extracurricular activities. This is likely due to a number of factors, including the increasing pressure to succeed in college and the growing popularity of sports and other activities.

Another trend is that teachers are facing increasing workloads and demands. This can make it difficult for them to provide students with the individual attention they need.

Finally, the curriculum in many schools is becoming more standardized, which can lead to a lack of creativity and innovation.

All of these factors can contribute to a decline in the quality of education. However, it is important to note that there are also many schools and teachers who are working hard to provide students with a high-quality education.

Here are some of the reasons why the quality of education may have declined:

  • The increasing focus on standardized testing has led to a narrowing of the curriculum and a focus on test-taking skills over critical thinking and problem-solving skills.
  • The shortage of qualified teachers has made it difficult to attract and retain the best teachers, and those who are teaching are often overworked and underpaid.
  • The increasing cost of education has made it more difficult for families to afford to send their children to college, which has led to a decline in the number of students who are graduating from college.
  • The rise of technology has made it easier for students to access information outside of the classroom, but it has also made it more difficult for them to focus and learn.

Despite these challenges, there are also many reasons to be optimistic about the future of education. There is a growing movement to reform the education system and to focus on teaching students the skills they need to succeed in the 21st century. There are also many dedicated teachers who are working hard to make a difference in the lives of their students. With hard work and dedication, we can reverse the decline in the quality of education and ensure that all students have the opportunity to succeed.


I was a bit surprised at the differences in the responses and completely unsurprised by the commonality. I wouldn’t expect them to know, since nobody knows; all these large language models (LLMs) do is access whatever information the particular model was provided when prompted with a question. The reality is that we don’t know, therefore an LLM wouldn’t know.

If anyone thinks that’s not an issue, I think they may have had an education system land on their head in a most grotesque fashion.

We’re getting marketed “artificial intelligence”, machine learning, deep learning, neural networks, large language models and just about every cool phrase that causes nerdgasms.

When we consider the status of education systems for humans – human learning – we don’t even seem to have an idea of where we are. Further, students who were once copying and pasting from the Internet for assignments now get large language models doing their homework – if they can access them.

Something doesn’t seem quite right about that. Certainly, the technological progress is great, but I’m not sure we’re making smarter humans. I’m also not sure we’re making wiser humans.

What’s there to do? There seems to be at least some people thinking about the topic of education and AI, but as a society, are we too busy paying the bills and chasing red dots to have time for the future?

Beware The False ChatGPTs.

It was inevitable. Unscrupulous folks decided to cash in on ChatGPT’s financial success by tossing some fake ones out there for cell phones, and they’re making some money off fleeceware.

See this article by the experts at Sophos for more.

The immediate list you should delete from your phone at the time of this writing:

  • Open Chat GBT – AI Chatbot App
  • AI Chatbot – Ask AI Assistant
  • AI Chat GBT – Open Chatbot App
  • AI Chat – Chatbot AI Assistant
  • Genie – AI Chatbot
  • AI Chatbot – Open Chat Writer

More on these fake ChatGPTs can be found in an in-depth article on Tom’s Hardware, which includes how to use the legitimate ChatGPT.

It should go without saying that you should always get software from the source and if deals look too good to be true, they probably are.

AI: Standing on the Shoulders of Technology, Seeking Humanity

“When the mob governs, man is ruled by ignorance; when the church governs, he is ruled by superstition; and when the state governs, he is ruled by fear. Before men can live together in harmony and understanding, ignorance must be transmuted into wisdom, superstition into an illumined faith, and fear into love.”

Manly P. Hall, The Secret Teachings of All Ages.

It’s almost impossible to keep up with all that is going on related to discussion on what’s being marketed as artificial intelligence, particularly with a lot of speculation on how it will impact our lives.

Since the late 1970s, we have evolved technology from computers to personal computers to things we carry around that we still call ‘phones’, although their main purpose does not seem to revolve around voice contact. In that time, we’ve gone from having technology on a pedestal that few could reach to a pedestal most of humanity can reach.

It has been a strange journey so far. If we measure our progress by technology, we’ve been successful. That’s a lot like measuring your left foot with your right foot, though, assuming you are equally equipped. If we measure success fiscally and look at the economics of the world, a few people have gotten fairly rich at the expense of a lot of people. If we measure it in knowledge access, more people have access to knowledge than any other time on the planet – but it comes with a severe downside of a lot of misinformation out there.

We don’t really have a good measure of the impact of technology in our lives because we don’t seem to think that’s important outside of technology, yet we have had this odd tendency in my lifetime to measure progress with technology. At the end of my last session with my psychologist, she was talking about trying to go paperless in her office. She is not alone.

It’s 2023. The paperless office was one of the technological promises made in the late 1980s. That spans more than three decades. In that same period, it seems that the mob has increasingly governed, superstition has governed the mob, and the states have increasingly tried to govern. It seems that as a whole, despite advances in science and technology, we, the mob, have become more ignorant, more superstitious and more fearful. What’s worse, our attention spans seem to have dropped to 47 seconds. Based on that, many people have already stopped reading because of ‘TLDR’.

Into all of this, we now have artificial intelligence to contend with:

…Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI, many other researchers are worried too…

HT Tech, “AI Experts Aren’t Always Right About AI”.

Counter to all of this, we have a human population that is clearly better at multiplying than at math. Most people around the world are caught up in their day-to-day lives, working toward some form of success even as we are inundated with marketing and biased opinions parading as news, all through the same channels that now connect us to the world.

In fact, it’s the price we pay, the best price Web 2.0 could negotiate, and if we are honest we will acknowledge that at best it is less than perfect. The price we pay for it is deeper than the cost we originally thought, and may even think now. We’re still paying it, and we’re not quite sure what we bought.

“We are stuck with technology when what we really want is just stuff that works.”

Douglas Adams, The Salmon of Doubt.

In the late 1980s, boosts in productivity were sold to the public as ‘having more time for the things you love’ and variations on that theme, but that isn’t really what happened. Boosts in productivity came with a corporate focus that meant the more you did, the more you had to do. Speaking for myself, everyone hired for 40-hour work weeks but demanded closer to 50. Sometimes more.

Technology marketing hasn’t been that great at keeping promises. I write that as someone who survived as a software engineer with various companies over the decades. Like so many things in life, the minutiae multiplied.

“…Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year…”

Naomi Klein, “AI Machines Aren’t ‘Hallucinating’. But Their Makers Are”.

There was a time when a software engineer had to go from collecting requirements to analysis to design to coding to testing to quality assurance to implementation. Now these are all done by teams. They may well all be done by versions of artificial intelligence in the future, but anyone who has dealt with clients first hand will tell you that clients are not that great at giving requirements, and that has been rolled into development processes in various ways.

Then there is the media aspect, where we are all media tourists picking our social media adventures, creating our own narratives from what social media algorithms pick for us. In a lot of ways, we have an illusion of choice when what we really get are things that algorithms decide we want to see. That silent bias also includes content paywalled into oblivion, never mind all that linguistic bias, where we’re still discovering new biases.

Large Language Models like ChatGPT, called artificial intelligences with a degree of accuracy, have access to information that may or may not be the same that we may have in our virtual caves. They ‘learn’ faster, communicate faster and perhaps more effectively, but they lack one thing that would make them fail a real Turing test: Being human.

This is not to say that they cannot fake it convincingly by using Bayesian probability to stew our biases into something we want to read. We shouldn’t be too surprised, we put stuff in, we get stuff out, and the stuff we get out will look amazingly like stuff we put in. It is a step above a refrigerator in that we put in ingredients and we get cooked meals out, but just because a meal tastes good doesn’t mean it’s nutritious.
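The ‘stuff in, stuff out’ point can be sketched with a toy bigram predictor. This is nothing like ChatGPT’s internals, which are neural networks at enormous scale, but the statistical spirit is the same: the model can only recombine what it was fed, biases included.

```python
import random
from collections import defaultdict

# A toy next-word predictor: count which word follows which, then sample.
# Whatever is in the training text is all that can ever come back out.
def train(text: str) -> dict:
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, seed: str, length: int = 8) -> str:
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "garbage in garbage out applies to people as much as it applies to computers"
model = train(corpus)
print(generate(model, "garbage"))  # only ever recombines what it was fed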

“We’re always searching, but now we have the illusion we’re finding it.”

Dylan Moran, “Dylan Moran on sobriety, his childhood, and the internet | The Weekly | ABC TV + iview”.

These stabs at humanity with technology are becoming increasingly impressive. Yet they are stabs, with all that potentially goes with stabs. A world limited to artificial intelligences can only make progress within the parameters and information that we give them. They are limited, and they are as limited as we are, globally, biases and all. No real innovation happens beyond those parameters and information. It does not create new knowledge; it simply dresses up old knowledge in palatable ways very quickly, but what is palatable now may not be so next year. Or next month.

If we had been dependent on artificial intelligences in the last century, we may not have had many of the discoveries we made. The key word, of course, is dependent. On the other hand, if we understood their limitations and incentivized humanity to add to this borgish collective of information, we might have made technological and scientific progress faster, but… would we have been able to keep up with it economically? Personally?

We’re there now. We’re busy listening to a lot of billionaires talk about artificial intelligences as if billionaires are vested in humanity. They’re not. We all know they’re not; some of us just pretend they are. Their world view is very different. This does not mean that it’s wrong, but if we’re going to codify an artificial intelligence with opinions somehow, it seems we need more than billionaires and ‘experts’ in such conversations. I don’t know what the solution is, but I’m in good company.

The present systems we have are biased. It’s the nature of any system, and the first role of a sustainable system is making sure it can sustain itself. There are complicated issues related to intellectual property that can diminish new information being added to the pool balanced with economic systems that, in my opinion, should also be creating the possibility of a livelihood for those who do create and innovate not just in science and technology, but advance humanity in other ways.

I’m not sure what the answers are. I’m not even sure what the right questions are. I’m fairly certain the present large language models don’t have them because we have not had good questions and answers yet to problems affecting us as a species.

I do know that’s a conversation we should be having.

What do you think?

Is Output of ChatGPT Text a Derived Work?

One of the things that has bothered me most about ChatGPT is that its data was scraped from the Internet, where a fair amount of writing I have done resides. It would be hubris to think that what I wrote is so awesome that it could be ‘stealing’ from me, but it would also be idiotic to think that the content ChatGPT produces isn’t derivative in a legal sense. In a world almost critically defined by self-preservation, I think we all should know where the line is. We don’t, really, but we should.

I’m no lawyer, but I’ve had my own ‘fun’ with copyright.

In fact, New Tech Observations from the UK (ntouk) seems to have caught ChatGPT lifting the plot of Alice in Wonderland without any attribution.  There are legal issues here that seem to have been ignored in most of the hype, where even reusing content from ChatGPT could be seen as contributing to the infringement.
That hasn’t really stopped anyone, since most people don’t seem to take copyright seriously unless they work for an organization that takes copyright seriously, and even then only within specific contexts. This is why I point out where I have used a large language model such as ChatGPT for anything, since I’m citing it citing nobody, and even then I don’t use it for generating content other than some interesting images.

Entities with deep pockets are protected by their deep pockets, but the average person writing on the Internet has shallower pockets, and there are more of us. I’ve had content ‘borrowed’ without attribution. It can provoke anything from mild amusement to outrage, particularly when some schmuck borrows a popular post without citation so that they can ‘produce’ content they didn’t actually produce. And copyright is implicit.

Privacy is a partner to copyright as well. I’m wondering when the question will be raised about text scraped for these training models by publishers that deal mainly with text rather than images, because the image lawsuits are already happening.

For now, I suppose, don’t put anything online that you wouldn’t want anyone regurgitating without attribution.

Captcha THAT.

When I first started programming, I did a lot of walking. A few months ago I checked the distance I walked every day just back and forth to school, and it was about 3.5 km, not counting being sent to the store or running errands. At the same time, we had an IBM System/36 and a PC Network at school where space was limited, time was limited, and you didn’t have much time to be productive on the computer, so you’d better have had your work locked down.

At that point, the language was BASIC. The popularity of object-oriented programming had not blessed (and cursed) us yet, so we had line numbers on each line, handy for debugging because the most basic errors would tell you where you had a typo. There was an hour every few days to type assignments in so that you could get a grade, or maybe even do something of worth and understand what you were doing.

During that period, can you guess where I did most of my programming? When I was walking around seemingly aimlessly in parking lots, or staring at trees, or anything but staring at a computer monitor. Computers were not plentiful, the time on them was limited, you didn’t have time to screw around on a keyboard.

I have survived decades of programming since then. I still fiddle now and then, but after being beaten to market by Google on getting stuff out (“Set your sights high!”, they tell you…) I’m a bit tired of chasing those particular red dots. There was almost always someone who found my absence from my desk intolerable, someone who thought what they thought mattered more than results, but I got results. If you saw me typing frantically away at a keyboard, it wasn’t a spur of the moment thing. There was thought that went into crafting that code, there was planning and bulletproofing, to the point where as I became more senior I spent less time at the keyboard than many people in departments I worked in.

I mention all of this because software engineering has changed over the years. In my days, when we were learning we were not given answers from websites like Stack Overflow, we didn’t even have websites. If we were lucky we had the manual for the language, we had plausible typing skills and we had limited time on the machines.

This isn’t ‘walk uphill both ways’; this is ‘we did this without all the cool toys you have now’. It’s not that we had it harder, it’s that we did it differently. We didn’t have editors that were forgiving, much less helpful. Within such a short window, technology for programming has come a very long way, and it’s kind of cool, except for all the silly Python editors and tools apparently written by the children of people who thought “The Lord of the Rings” book trilogy was evil.

From the 1980s to now, it’s been a real whirlwind with way too much hype on way too many things that nobody recalls immediately. Then the captcha came along, to make sure ‘bots’ weren’t trying to do things, to check if a real human being was involved.

So humanity doubled down on that with large language models like ChatGPT. I guess kids stopped walking to school, they got more computers, and now they don’t even have to do their own homework.

I’m not sure where this is heading, but I’ll be making popcorn.

The Societal Mirror.

The article “Introducing the AI Mirror Test, which very smart people keep failing” hits some pretty good points on these large language models that we are lazily calling artificial intelligence. One of my favorites is this:

…What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient…

As I pointed out in a post on ChatGPT and large language models, ‘A Chat With GPT on AI’, I recognized that it was meeting my cognitive bias. In that regard, I recognized some of myself in what I was getting back, not too different from when I was playing with Eliza in the 1980s, the only difference being that the bot has gotten better because it has access to more information than what the user types in. We were young, we dreamed, but tech wasn’t ready yet.
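For anyone who never met Eliza: it was little more than pattern matching with canned, reflective responses. Here’s a sketch in that spirit, with my own toy rules rather than Weizenbaum’s original script:

```python
import re

# A minimal Eliza-style responder: a few pattern-and-reflection rules,
# no understanding at all. These rules are illustrative inventions.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
```

The trick is the same one that made Eliza feel alive in 1966: the response is built from the user’s own words, so we see ourselves in it.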
Of course it’s a mirror of ourselves in that regard, but the author didn’t take it to the next step. As individuals we should be seeing ourselves in the output, but we should also understand that it’s global society’s mirror as well, with all the relative good and relative bad that comes with it. We have biases in content based on language, on culture, on religion, and on much more. I imagine the Amish don’t care, but they are still part of humanity and we have a blind spot there, I’m certain, never mind all the history that our society has erased, continues to erase, or has simply ignored.

Personally, I find it a great way to poll the known stores of humanity on what its biases believe, no matter how disturbing the results can be. And yet we’re already likely diluting our own thoughts reflected back at us, as marketers and bloggers (not mutually exclusive) churn out content from large language models that the models will eventually train on. That’s not something I’m comfortable with, and as usual, my problem isn’t so much technology as society, a rare thing for me to say when so much technology is poorly designed. Am I ‘victim shaming’?

When the victim is the one abusing themself, can it be victim shaming?

Our own echo chambers are rather shameless.

Through A Blurry Looking Glass.

I’ve been spending, like so many, an inordinate amount of time considering the future of what we accuse of being artificial intelligence, particularly since I’ve been focusing on my writing and suddenly we have people getting things written for them by ChatGPT. I’ll add that the present quality doesn’t disturb me as much as the reliance on it.

Much of what these artificial intelligences pull from is on the Internet, and if you’ve spent much time on the Internet, you should be worried. It goes a bit beyond that if you think a bit ahead.

Imagine, if you would, artificial intelligences quoting artificial intelligences trained by artificial intelligences. It’s really not that far away, and it may have already begun as bloggers looking to capitalize on generating content quickly thrash their keyboards to feed prompts to ChatGPT and its ilk, creating blog posts that pop up in search engine results when they market their content. Large language models (of which ChatGPT is one) then treat this as great content, because what is repeated most makes predictive models say, “Aha! This must be what they mean or want!”
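That recursion is easy to simulate crudely. The sketch below is nothing like a real training pipeline; it just resamples a word list generation after generation, which is enough to show the drift toward whatever was already most common:

```python
import random
from collections import Counter

# A crude simulation of models training on their own output: each
# "generation" is sampled from the previous one, so rare words tend to
# vanish while common ones take over. (A toy, not a training pipeline.)
random.seed(42)

corpus = ["common"] * 90 + ["rare"] * 10

for generation in range(5):
    # the next model's "training data" is drawn from the last model's output
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(generation, Counter(corpus))
```

Each pass loses a little of the original variety, which is roughly what researchers worry about when AI-generated text feeds the next round of training.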

[Cartoon: “long email ai”]

This cartoon pokes at the problem of “AI written, AI read”, but not at the recursive issue. Fortunately, on a personal blog I can use it; otherwise it would take us down the path of copyright and the Stable Diffusion lawsuit.

This has the potential to be a scary future. People who look a few steps ahead have started an open letter, which of course I signed, asking to Pause Giant AI Experiments. It’s a symbolic thing, I fully expect, because the people behind these artificial intelligences aren’t really interested in much more than their bottom line, with, as far as I know, one notable exception.

Meanwhile, so many seem to think that these Large Language Models are a nice replacement for the nerds doing their homework, because then they don’t have to hang out with them. There’s a reason I bring this up. One of the reasons artificial intelligences, Large Language Model or otherwise, are considered creative is that they behave like well-read people with mega-gobs of information at their digital synapses. They make it easier to find intersectionality – a realm that has become more and more important, and less and less thought about, in a society where specializations require diplomas to do things that… one really doesn’t need a diploma to do.

And it doesn’t know everything. For example, I know a bit about Trinidad and Tobago, so I posed a question to ChatGPT related to the economy of Trinidad and Tobago (response in italics):

What are good exports that Trinidad and Tobago could focus on to help build the economy and better the socioeconomic status of its citizens?

Trinidad and Tobago has a diversified economy with a strong focus on the energy sector, particularly oil and gas production. However, there are several other sectors that the country could focus on to diversify its exports and boost its economy. Some potential export industries for Trinidad and Tobago include:

Tourism: Trinidad and Tobago has a rich cultural heritage, beautiful beaches, and unique natural attractions such as the Caroni Swamp and the Asa Wright Nature Centre. The country has the potential to further develop its tourism industry and attract visitors from around the world.

Agriculture: Trinidad and Tobago has a favorable climate for agriculture and could focus on exporting products such as cocoa, coffee, citrus fruits, and vegetables. The country could also explore the potential for organic farming and niche products such as spices and herbs.

Creative industries: Trinidad and Tobago has a vibrant cultural scene and could focus on exporting its music, film, and other creative products. The country has a rich tradition of Carnival, which could be further developed as a major tourist attraction and cultural export.

Information and communication technology (ICT): Trinidad and Tobago has a well-educated and tech-savvy workforce, and the country could focus on developing its ICT sector and exporting software, IT services, and other digital products.

Manufacturing: Trinidad and Tobago could focus on developing its manufacturing industry and exporting products such as chemicals, plastics, and processed foods.

Of course, developing any of these industries would require significant investment and effort, and there may be other factors to consider such as infrastructure, labor costs, and market demand. However, focusing on these industries could help diversify the economy and provide new opportunities for the citizens of Trinidad and Tobago.

Sounds very good, right? Devil is in the details.

Tourism is a definite possibility, but the decades-long crime situation (which I and others believe stems from socioeconomics related to the lack of diversity in the economy), as well as flash flooding and a focus on marketing rather than quality… no. I do like that it mentioned the Asa Wright Nature Centre, and if anyone actually does come down this way, I can happily point you to other places you won’t find in the tourist brochures.

Agricultural land has been used by the government to build housing, so arable land is decreasing with every project the Housing Development Corporation creates, as well as with every agricultural plot converted to residential, commercial or industrial use, depending on who greases the wheels.

Manufacturing would be brilliant. Very little is made in Trinidad and Tobago, but if you’re going to be competing with robots and artificial intelligences in the developed world, we can throw that out.

ICT is my personal favorite, coming from a chatbot that has already got people generating code with it. Seriously, ChatGPT?

Everything ChatGPT has presented has been said more than once in the context of diversifying the economy of Trinidad and Tobago, and it’s a deep topic that most people only understand in a very cursory way. The best way to judge an economy is to observe it over time. In the grand scale of global discourse, a dual-island nation of an estimated 1.5 million people is not as interesting to the rest of the world as Trinbagonians would like to think – like any other nation, most people think it’s the center of the universe. But it’s not a big market; for opportunities, young intelligent people leave as soon as they can (brain drain), and what we are left with aspires to mediocrity while hiring friends over competency. A bit harsh, but a fair estimation in my opinion.

How did ChatGPT come up with this? With data it could access, and since Trinidad and Tobago is an infinitesimal slice of global interest, not much content is generated about it other than government press releases by politicians who want to be re-elected so that they can keep their positions – a situation endemic to any democracy that elects politicians, though in Trinidad and Tobago there are no maximum terms for some reason. A friend sailing through the Caribbean mentioned how hard it was to depart an island, and I responded with, “Welcome to the Caribbean, where every European colonial bureaucracy has been perpetuated into stagnancy.”

The limitations of using Trinidad and Tobago as a test case – an outlier in the global information database we call the Internet – are pretty revealing: there is a bias the model doesn’t know about, because the data it feeds on is itself biased, and that is unlikely to change.

But It’s Not All Bad.
I love the idea that these large language models can help us find the intersectionality between specialties. Decades of my life have been spent doing just that. I read all sorts of things, and much of what I have done in my lifetime has been cross-referencing ideas from the different specialties I have read up on. I once solved a memory issue in a program on the Microsoft Windows operating system by pondering Costa Rican addresses over lunch. Intersectionality is where many things wander off to die these days.

Sir Isaac Newton pulled from intersectionality. One biography describes him as a multilingual alchemist whose notes were written in multiple languages – probably, one must consider, a reflection of his internal dialogue. He didn’t really discover gravity – people knew things fell down well before him, I’m certain – but he was able to pull from various sources and come up with a theory he could publish, something he became famous for, and something he was infamous for with respect to the politics of academia.

J.R.R. Tolkien, who recently had a great movie made about his life, was a linguist who pulled from many different cultures to put together fiction that has transcended his death. His book “The Hobbit” and the later trilogy “The Lord of the Rings” have inspired entire genres of fantasy fiction, board games and much more.

These two examples show how pulling from multiple cultures, languages, and specialties is historically significant. Large Language Models do much the same.

Yet there are practical things to consider. Copyrights. Patents. Whether they are legal entities or not. The implicit biases in what they are fed, with the old software engineering ‘GIGO’ (garbage in, garbage out) coming to mind, and the potential for irrevocable recursion: supercharging that garbage and spewing it out to the silly humans who, as we have seen over the last decades, will believe anything. Our technology, and the marketing of it, are well beyond what most people can comprehend.

We are sleeping, and our dreams of electric sheep come with an invisible electric fence with the capacity to thin the herd significantly.


A Chat With ChatGPT On AI.

In my youth, I fiddled with Eliza. For decades I read books on AI and thought deeply on the subject in many ways. In later decades, I would find myself playing with neural networks at home even while I did the drudgery of coding and debugging systems that made the term ‘software archaeology‘ very real to me. I’m no expert on this. I am, however, a well-informed layperson.
I was writing an article about this here, and I had enough false starts that I decided to try a new approach. There are just so many ways to discuss artificial intelligence, so many perspectives, and of course the din of voices shouting their thoughts on the Internet can be distracting.

It seems that every 5 minutes someone is posting something new about artificial intelligence, but it’s generally not new. They’re posting the same things about artificial intelligence in different ways, in the hope that some will be popular. It’s the way of the Internet. It’s the way of Media. It is, with apologies to the Mandalorian, “The Way”.

I decided to have a chat with ChatGPT, and without further ado, here are the results. My prompts are in bold, the response in italics.

I was wondering what you thought about artificial intelligence and colonialism.

The relationship between artificial intelligence (AI) and colonialism is a complex and multi-faceted one. On one hand, AI has the potential to exacerbate existing colonial power dynamics by reinforcing existing biases and inequalities. For example, if AI systems are trained on data sets that are biased against certain groups or cultures, they may perpetuate these biases in their decision-making processes. Similarly, if AI is used to automate tasks that have traditionally been performed by workers in developing countries, it could exacerbate economic inequalities and deepen the divide between the global North and South.

On the other hand, AI also has the potential to challenge colonial power structures by enabling greater access to information and knowledge. For example, AI-powered translation technologies can help to break down language barriers and facilitate cross-cultural communication. Similarly, AI can be used to analyze and interpret historical data in new ways, helping to challenge dominant narratives and promote a more nuanced understanding of the past. Ultimately, the relationship between AI and colonialism is shaped by a complex interplay of factors, including the specific ways in which AI is developed, deployed, and regulated, as well as the broader social, economic, and political contexts in which it operates. It is important for researchers, policymakers, and practitioners to be mindful of these dynamics and work to ensure that AI is developed and deployed in ways that promote social justice, equity, and respect for human rights.

As you can see, I leapt right into the deep end, and it reflected close to my own thoughts on what the answer should be – which feeds my own cognitive bias without revealing anything new to me. Understanding our own cognitive biases when dealing with ChatGPT and its like seems important after my first ‘session’ with it, so with every response I’ll ask you, gentle reader, to consider your own cognitive biases as you read this.

Should Artificial Intelligences be regulated, and if so, by whom?

Yes, I believe that artificial intelligence (AI) should be regulated to ensure that it is developed and deployed in ways that are safe, ethical, and aligned with societal values. Regulation is necessary because AI systems have the potential to cause harm, whether through unintentional errors, biases, or malicious use. In addition, the rapid pace of AI development means that existing legal and regulatory frameworks may not be sufficient to address the unique challenges posed by these technologies.

The question of who should regulate AI is a complex one that involves a range of stakeholders, including government bodies, industry organizations, and civil society groups. Ultimately, the responsibility for regulating AI should be shared across multiple actors, each with their own strengths and expertise. Governments play a critical role in setting legal and ethical standards for AI development and deployment, as well as in enforcing these standards through regulation and oversight. Industry organizations can contribute by developing best practices and standards for AI development, sharing knowledge and expertise, and promoting transparency and accountability in AI systems. Civil society groups can provide critical oversight and advocacy, helping to ensure that AI systems are aligned with societal values and are being used in the public interest.

Effective regulation of AI will require ongoing collaboration and dialogue among these different stakeholders, as well as ongoing monitoring and adaptation in response to new developments and challenges. It will also require a commitment to transparency, accountability, and public engagement, so that the benefits and risks of AI are fully understood and can be balanced appropriately.

This is also something that went with my cognitive bias, though with one exception: I personally tend to go against government regulation on many things because it’s hard to find things that governments around the world haven’t consistently screwed up somehow. Those big wheels can get messy, and it depends a lot on who is choosing the direction.

If we delve into its answer further, there’s a lot of ambiguity hidden under familiar words. “Safe, ethical, and aligned with societal values” sounds very good until you start deciding what is safe, what is ethical, and which societal values to align with. In its own way, it’s putting the responsibility on the people who will be impacted by artificial intelligences in socioeconomic ways. Consider, in any system accused of democracy (we’ll get to that): if it works as it’s supposed to, and people are being put on the bread line because of artificial intelligences rather than people who cross lines on maps… what do you think the response will be, democratically? I didn’t take it in that direction, though, because I’m meeting a new bot and I’m playing with it.

Instead, I explored in a new direction.

What do you think of the quotation by Henry David Thoreau, where ‘men have become the tools of their tools’?

The quotation by Henry David Thoreau, where “men have become the tools of their tools,” speaks to the idea that technology can shape our behavior and way of life, sometimes in ways that we do not fully understand or intend. This idea reflects a broader concern that our relationship with technology is not always one of control and agency, but can be characterized by a sense of dependency and even subservience.

Thoreau’s quotation raises important questions about the role of technology in our lives, and the extent to which we are able to shape and control its impact. In some cases, our use of technology may reflect our values and priorities, and enable us to achieve our goals in more efficient and effective ways. However, in other cases, our use of technology may be driven by external forces, such as corporate interests or social pressures, and may have unintended consequences that we do not fully appreciate.

To avoid becoming “tools of our tools,” it is important to approach technology with a critical and reflective mindset, and to be mindful of its potential impact on our lives and society as a whole. This requires a willingness to question our assumptions and beliefs about technology, and to engage in ongoing dialogue and debate about its role in shaping our world. It also requires a commitment to developing ethical frameworks and policies that guide the development and use of technology, and that ensure that it serves the needs and aspirations of all members of society.

It has drawn upon more than just facts here; without the keyword ‘artificial intelligence’ or ‘AI’, it gives a broader result – which should be expected, and I did expect it. The analysis smacks of a lot of things written about Thoreau’s quote, which I imagine may be within the data it has drawn upon. In a world where people worry about ChatGPT doing our homework, we are ourselves its homework. How do I come to that conclusion? This was a response where it used “our” inclusively. Go look again.

Am I impressed? Of course, but not in the ways that other people are shouting from the rooftops. I understand, at least in the broad strokes, how it works. It’s not sentient. But it draws upon the information we give it, it pulses with the money that it’s funded with, and it gives us a reflection of what we have to say.


And there’s more to come.