Trinidad and Tobago and/vs AI.

When I wrote ‘Artificial Extinction’, I briefly touched on coverage related to artificial intelligence here in Trinidad and Tobago. It’s hard to explain just how out of mind it is, so I’ll just write a bit about the local scene.

Today, as I stood in line for an annoyingly long time waiting to pay for a 5L bottle of water at a local convenience store, I glanced at the headlines. As usual, someone was having trouble with something at the head of the line, the other register was closed, and the line grew.

One of the benefits of that line is that I get to run my eyes across the front pages of the local newspapers: Newsday, Trinidad Express and Trinidad Guardian.

The Rastafarian gentleman in front of me found something of interest in the Trinidad Express. I saw something about the need for Constitutional Reform, a picture of “Indian Arrival Day Stalwarts”, ‘Paradise in Peril’ and a plea from the mother of a kidnapping victim. Having been back and forth over the decades, the news seems to say the same things with only the names changing. The politicians play politics, crime has spiraled for so long that the criminals are now in control, and nobody has fresh ideas. The ideas all seem to be foreign and already abused, much like some of the used cars you can buy from Japan.

This is the canvas upon which local news is painted daily. I thought about seeing Trinidad and Tobago represented on Planet Earth (Episode 6) through Grand Riviere Village’s volunteer work to assist and protect the leatherback turtles. When I did a web search, I found the leatherback turtle site offline (something I’m considering digging into). That’s a shame. Keeping a website online for something with international attention seems important.

I get home; walking past the condo’s office, I wave briefly at the administrator, who is busy talking with someone. Fifteen years as a corporate secretary, retired, decades of experience that could soon be replaced with something purchased off the shelf. The latent thought of my own experience being replaced looms quietly in the background as I enter the elevator, my thoughts on how to connect the local perspective on technology, and thus artificial intelligence, to the larger global perspective of “this could end very badly”.

My friends and neighbors are more worried about their families’ security than some online application spitting out gobs of text when asked a question. In a land where there are no questions, no one needs an oracle. The economic diversity of Trinidad and Tobago is simply not there, the oil money stolen or squandered (or both), and the youths see increasingly little opportunity outside of crime, as we talked about while I was at the barber shop last week.

Artificial intelligence is not going to help with these things, because these are largely broken systems that those who profit from them do not want to fix. ChatGPT can go blue in the face telling the politicians what they should do. They’ve been told what it has to offer thousands of times before over the decades. The faces largely have not changed, only grown older and, in one case, distinctly more cadaverous.

Years ago, a Minister messaged me because of something I wrote, asking where I got the data from. I cited the source he should have been aware of: the open data portal of Trinidad and Tobago. He was agog. He’d been asking for that information for over a year and no one seemed to know where it was. The website has since been updated; the data, not so much.

Meanwhile, the largest employer in Trinidad and Tobago is the government, where many good people participate in overcomplicated wheels of bureaucracy. We could use technology to replace much of that, but then where would the people work? And since they vote, who would they vote for if they lose their jobs?

With this context, I can now discuss AI in Trinidad and Tobago as it relates to jobs, particularly the last three paragraphs of a recent column:

“…Taking charge of this rapidly evolving scenario of workplace change will demand one fundamental and overdue evolution in governance, the continuous gathering and distribution of actionable information about how this country operates.

It was a note that Jonathan Cumberbatch, Assistant VP, Human Resources and Administration at UTT touched on cautiously when he noted that, “Data drives most of the conversation outside of TT, but we don’t have a sense of that in TT.”

The propensity of governance to proceed on feelings, hunches and political expedience might have worked in the past, but the national distaste for transparently gathered, publicly available information cannot continue into an era hallmarked by a reliance on reliable, continuously updated datasets.”

“AI and your job”, Mark Lyndersay, TechNewsTT and BitDepth#1408 for May 29, 2023

Of course, it wasn’t a global roundup of people related to AI, just those with local interests talking to the local Chamber of Commerce about their products. Microsoft was definitely there; others… not here.

The short answer is that Trinidad and Tobago isn’t ready. Neither is most of the rest of the world, which is why there’s concern from some quarters. I’ve seen firsthand government offices and even business offices completely ignore data-driven approaches. Just recently, I proposed starting with the basics in the condo’s office, only to hear that, without any actual data, they’re pushing forward with a ticket system to solve all the problems. In time they will find it creates new ones, but that will be another story.

The point is that if we can’t even manage data-driven approaches, or keep a volunteer website up when it has international attention, the wave of artificial intelligence that will drive the world economy will leave many people stranded on islands, perhaps even twin-island Republics. What will be done about this?

Maybe they’ll talk about it in Parliament. Then, if history repeats itself, nothing will happen.

Or, things could change. Things definitely should change, but those changes need to happen faster and faster as the government slides into the Pitch Lake, dragging its citizens with it.

Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, like CNBC’s “A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn”.

Different versions of that article written by different organizations are all over right now, but it derives from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI, we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I’d offer that this can make it more dangerous.

In the context of what I’ve been writing about, we’re looking at what we feed our heads with. We’re looking at social media being gamed to cause confusion. These are dangerous things. Garbage in, Garbage out doesn’t just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan for how to employ those displaced. Do people want jobs? I think that’s the wrong question, one we got stuck with in the old paint on society’s canvas. The more appropriate question is, “How will people survive?”, and that’s a question we overlook because of the assumption that if people want to survive, they will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not; they like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with “excerpts” from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don’t exist.”

“Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’”, Jon Brodkin, Ars Technica, May 30, 2023

It’s a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction created by those who control today’s large language models and whatever general artificial intelligences come later.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan it will detect that the user is female and serve her a different set of propaganda, telling her to stay home and stop playing with keyboards.

It’s not hard to imagine all of this. It is a big deal, but in parts of the world like Trinidad and Tobago, you don’t see much about it because there’s no real artificial intelligence here, even as local newspaper headlines indicate real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around discussing it. Odd how that didn’t come up in a Google search of “AI Trinidad and Tobago”.

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn’t as tangible as nuclear war or pandemics to people. It’s how it will change our way(s) of life, how we’ll provide for families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots the media doesn’t show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world – but it gets lost in the shuffle of social media algorithms if it isn’t organically popular.

This is going to change societies around the globe. It’s going to change global society, where the access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We’ve propped our civilizations up with all manner of things that have not withstood previous changes in technology, and this is a definite leap beyond that.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don’t know that we have any advice. That means they won’t be prepared.

But then, neither were we, really.

Gaming The Medium

Even as we paint on society’s canvas, society paints on our individual canvases, and in this modern world of the Internet, social media and games, there’s a lot of paint being thrown around. Our world changes us, we change our world.

It’s not all as pretty as staged videos on Instagram, TikTok and Facebook reels, where ‘influencers’ do their best to find attractive red dots for people to chase. It’s in their interest. Before the Internet, it was broadcast media, but now with social media there is an increasingly large illusion of being able to interact when we might just be interacting with some algorithms attached to a dictionary.

Algorithms, though, carry dangers.

Via CuriosityGuide.

The video outlines some of what has been happening that isn’t good.

Algorithms, though, are important and can be used for good. We don’t see that as much as we should, largely because a wide swath of algorithms are at best questionable in whether they do good or not. That questionability comes from what we all want to see from the world and what cost we wish to pay for it – or, in the case of Internet trolls, the cost we wish others to pay for the world they want to see. I have more to write about trolls, but not yet.

What do we want? Before we figure out who we are, we seem to be told who we need to be. We mimic behaviors as children, and we grow within the framework supplied by our environments – rewards and punishments are set. We begin playing the game. In an environment, or, subjectively, an anti-environment.

Life In the Anti-Environment: Learning How To Play is an interesting paper by Adam Pugen in this regard – you can find the PDF of the paper here. It’s focused on video games, yet much of what is in there could apply to social media, since the world is increasingly contrived and served through flat screens. This contrivance has been noted and mocked by more than one person; this German artist is a wonderful example, mocking Instagram photos.

In any game, there are things that are possible and things that are less possible. One of the more common real-world games, the lottery, will sell us on the fact that there is a possibility of winning despite an extremely low probability. The lottery has the distinction of being forced to be honest about the odds, but I have yet to see that honesty in the advertising for a lottery. What do you spend, and what do you get? Most people see spending a few dollars every week over the course of their lifetime as a worthwhile risk – otherwise there would be no lottery.
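The arithmetic behind that “worthwhile risk” is easy to check. Here’s a minimal sketch, using made-up ticket prices, odds and jackpot figures purely for illustration; real lotteries publish their own numbers:

```python
# Expected value of a weekly lottery habit, using made-up numbers;
# real lotteries publish their own odds and prizes.
TICKET_PRICE = 2.00            # dollars per ticket (illustrative)
JACKPOT = 5_000_000.00         # top prize (illustrative)
JACKPOT_ODDS = 1 / 14_000_000  # chance per ticket (illustrative)

# Expected return per ticket: the prize weighted by its probability.
expected_return = JACKPOT * JACKPOT_ODDS
expected_loss_per_ticket = TICKET_PRICE - expected_return

# A few dollars every week, over fifty years of playing.
weeks = 52 * 50
lifetime_spend = TICKET_PRICE * weeks
lifetime_expected_loss = expected_loss_per_ticket * weeks

print(f"Expected return per ticket: ${expected_return:.2f}")
print(f"Lifetime spend:             ${lifetime_spend:,.2f}")
print(f"Expected lifetime loss:     ${lifetime_expected_loss:,.2f}")
```

Even with a generous jackpot, most of every dollar spent is expected to be lost; the lottery sells the possibility, not the probability.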

The game environment is simply defined. Enter the world of multiplayer games, which connect people through the internet and allow them to interact within certain guidelines. People, of course, find the loopholes, and some enjoy the anonymous trolling aspect since they are faceless names and avatars. Others try to play the game by plodding through, others pay to get ahead, all depending on the game and how it is set up. If that doesn’t sound like a metaphor for modern social media, I don’t know what does.

All around the world, people are playing the social media game. How one ‘wins’ is dependent on how one views success, just like everyone else, but since social media is attached to real life more closely than other games there is the financial aspect that is quite real for the majority of the planet. How one loses, implicitly, is by not winning.

Now that we have large language models and the promises of artificial intelligence making things so much better, the game is more complicated.

If money is how we measure success, there are billions of people losing. We could change how we could measure success, or we could change the odds. Right now, the odds seem to be going the wrong way. There has to be some middle ground between tossing out participation trophies and a few winners taking all.


You Ain’t Just the Medium.

There are some topics I’ve been writing about that people may not realize are connected, but they are. When I wrote about how we humans, we algorithms are doing bonsai on ourselves and artificial intelligences, it was not just happenstance.

We are a medium. Just one on the planet, but we are a medium, built upon a medium of a planet, and we’re building other mediums even while we interact in increasingly positive ways with other mediums as we grow to understand them.

The medium is the message.

Marshall McLuhan, Understanding Media : The Extensions of Man (1964)

This is important to understand. Regardless of how you believe this world came into being, we all should know by now about DNA and how we recognize that other living creatures also have DNA. Some of it is close to matching ours, but the results are different to us.

We’re a 96% match to chimpanzees, and I’m fairly certain chimpanzees know we’re very different from them in many ways.

Our DNA varies within our species as well, with what we call recessive and dominant genes and all their complexity of impacting everything from hair color to deciding whether our big toe is dominant on our feet or not.

We have social attributes, which could also be seen as mediums, since they too are canvases upon which we decorate our pieces of time. Language, culture, religion (or lack thereof) are some of the substrates upon which we grow our own mediums.

We aren’t just surrounded by information. We are information. We are history without words; a strand of our DNA tells us the path we traversed through time to get where we are.

It doesn’t tell us why we traversed the particular path that got us here. That’s for the archaeologists and others to piece together from fragments of physical history.

We are our own bonsai, where our trunk and branches show where we have grown from – the trail through time and the history of how we got where we are.

Each one of us, as an individual, has our own root system, our own trunk, our own branches. Each one of us is both medium and message, impacting our societal medium and message, all part of a planetary ecosystem of mediums and messages.

Everything we see has information, from the tree outside that has evolved over millions of years to that one specimen, to the street which follows a path dictated by many human variables.

If we stand back and look at our planet, allowing ourselves to fade into the background, we’re not too far off Douglas Adams’ allegory of the Earth being a computer designed to figure out the meaning of life. In fact, many people who have read Douglas Adams don’t necessarily understand how true it is.

It’s sort of like the Gaia hypothesis, though there are issues with mixing metaphor with mechanism, among other things. Popular thought on evolution ascribes intentionality to evolution, as if there were some guide to it all, but adaptation to survive is quite possibly the only intention.

We tend to ascribe intention and look for intention where there may be none. Intention makes us feel more comfortable, but it isn’t necessarily true.

“This is rather as if you imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!’ This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything’s going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.”

Douglas Adams, The Salmon of Doubt (2002)

We tend to think ourselves the center of our universe of existence, and we often treat ourselves as the North Star of the planet. This is likely natural; everything we know, every bit of information we process, comes to us through our senses.

Although the medium is the message, the controls go beyond programming. The restraints are always directed to the “content,” which is always another medium. The content of the press is literary statement, as the content of the book is speech, and the content of the movie is the novel. So the effects of radio are quite independent of its programming.

Marshall McLuhan, Understanding Media (1964)

This is why McLuhan balanced what he said in his over-used quote, “The medium is the message” with what was the technological equivalent of machine learning of his time: Radio.

Radio connected a world over distances previously daunting during that period, and while mostly a broadcast medium then, his focus needs to be understood.

Communication has evolved long beyond that in a little over half a century. ‘Programming’, now thanks to Web 2.0, is a matter of choosing people’s social media messages so they build their own narratives. Web 2.0 provided us the illusion of choice.

The medium was the message, the message became medium, the media became the message, and so on.

We forget that we, too, are medium, though we don’t altogether understand the message and maybe we’re in the process of finding out what that is.

It gets deeper, too, but I’ll leave you with one more quotation from McLuhan, who happened to say and write quite a few things that continue to make sense to this day.

Media are means of extending and enlarging our organic sense lives into our environment.

Marshall McLuhan, “The Care and Feeding of Communication Innovation”, Dinner Address to Conference on 8 mm Sound Film and Education, Teachers College, Columbia University, 8 November 1961.

Artifice Girl

With all that’s being marketed as artificial intelligence out there, this could be an interesting movie for at least some people who might like to see a merging of technology and humanity.

If you’re not appreciative of movies driven entirely by dialog, this is not your movie. There’s a bit of suspended disbelief required, too, that may not sit well with some people, but it is a movie and, like most things out of Hollywood, it’s pretty easy to find flaws when compared with the real world.

Still. The idea of using a chatbot to catch pedophiles is not bad. It’s also not new.

If you’ve never heard of Negobot, or Lolita Chatbot, it became public in 2013 – about a decade before ‘Artifice Girl’, and if some of the dialog wasn’t borrowed from that Wikipedia page, I would be surprised.

Even so, it was a pretty good movie. Topical in how we are responsible for what we create, topical in how imperfect we are as a species, and topical about how we ourselves are reflected in our technology, like so many bits of broken glass on the floor sometimes.

Overall, I think it could be a fairly important movie at this time since everyone is agog over large language models such as ChatGPT.

See below for the trailer.

Education and AI

Having coffee with a friend yesterday, I was listening to her talk about meetings with teachers and encouraging her daughter to read at least 30 minutes a day. This is likely mundane conversation for many people, but the minutiae of it are interesting to me.

It is, after all, a system.

Everyone I have met or interacted with that was an educator (or librarian, they don’t get enough love) has been passionate about learning. Educational administrators I know have been passionate about test results. Test results and learning are mutually exclusive.

The tests are to prove to the masses that the educational system is working. They rarely demonstrate what one knows about anything, only what was tested in tests that students are trained for. Learning, to me, is about not just having knowledge but applying that knowledge in practical ways. Deep understanding is not found through testing; it’s found through interaction, at least in my experience.

Sure, you can memorize equations and plug numbers into them, but unless you know more about the equations themselves, you’re just doing something a computer can do.

So let’s see what computers can do. I asked 3 different large language models, “Has the quality of education declined?”. Here are 3 different responses:

The question of whether the quality of education has declined is complex and can vary across different contexts and regions. It’s challenging to provide a definitive answer as education systems differ significantly worldwide, and numerous factors can influence quality.

While it’s true that some individuals and experts express concerns about declining educational quality, it is essential to consider various factors that impact education. These factors include changes in curriculum, teaching methods, funding, technological advancements, and societal shifts.

To assess the quality of education, it is necessary to examine specific indicators such as student performance on standardized tests, graduation rates, college enrollment rates, and feedback from educators, parents, and students. These indicators can provide insights into the overall effectiveness of education systems.

However, it is worth noting that generalizing the quality of education across all regions or institutions is not accurate, as there can be significant variations within and between countries. It is essential to consider local and contextual factors when evaluating educational quality.


As an AI language model, I cannot have an opinion. However, reports suggest that the quality of education varies by location and institution, with some improving and others declining. Factors like funding, teaching methodology, and teacher quality can also impact the quality of education. Additionally, the increasing demands of technology and the workforce may require updates to curriculum and teaching methods to effectively prepare students for the future.

DeepAI Chat.

There is no definitive answer to this question, as the quality of education can vary greatly depending on the country, school, and individual student. However, there are some general trends that suggest that the quality of education may have declined in recent years.

One trend is that students are spending less time on academic subjects and more time on extracurricular activities. This is likely due to a number of factors, including the increasing pressure to succeed in college and the growing popularity of sports and other activities.

Another trend is that teachers are facing increasing workloads and demands. This can make it difficult for them to provide students with the individual attention they need.

Finally, the curriculum in many schools is becoming more standardized, which can lead to a lack of creativity and innovation.

All of these factors can contribute to a decline in the quality of education. However, it is important to note that there are also many schools and teachers who are working hard to provide students with a high-quality education.

Here are some of the reasons why the quality of education may have declined:

  • The increasing focus on standardized testing has led to a narrowing of the curriculum and a focus on test-taking skills over critical thinking and problem-solving skills.
  • The shortage of qualified teachers has made it difficult to attract and retain the best teachers, and those who are teaching are often overworked and underpaid.
  • The increasing cost of education has made it more difficult for families to afford to send their children to college, which has led to a decline in the number of students who are graduating from college.
  • The rise of technology has made it easier for students to access information outside of the classroom, but it has also made it more difficult for them to focus and learn.

Despite these challenges, there are also many reasons to be optimistic about the future of education. There is a growing movement to reform the education system and to focus on teaching students the skills they need to succeed in the 21st century. There are also many dedicated teachers who are working hard to make a difference in the lives of their students. With hard work and dedication, we can reverse the decline in the quality of education and ensure that all students have the opportunity to succeed.


I was a bit surprised at the differences in the responses and completely unsurprised by the commonality. I wouldn’t expect them to know, since nobody knows; all these large language models (LLMs) do is draw on the information each model was provided when prompted with a question. The reality is that we don’t know, and therefore an LLM wouldn’t know.
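That commonality can even be measured mechanically. As a toy sketch (the “responses” below are short hard-coded stand-ins, not live model output), you can intersect the vocabularies of the answers to see the common ground the models retreat to:

```python
# Toy comparison of three hard-coded stand-in "responses" (not live
# model output): the words all three share hint at the safe common
# ground models fall back on when nobody actually knows the answer.
import re

responses = {
    "model_a": "The quality of education varies by region and institution.",
    "model_b": "Reports suggest the quality of education varies by location.",
    "model_c": "The quality of education can vary greatly by country and school.",
}

def vocabulary(text):
    """Lower-cased set of words, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

# The intersection is the vocabulary every response uses.
common = set.intersection(*(vocabulary(t) for t in responses.values()))
print(sorted(common))  # the shared hedging vocabulary
```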

If anyone thinks that’s not an issue, I think they may have had an education system land on their head in a most grotesque fashion.

We’re getting marketed “artificial intelligence”, machine learning, deep learning, neural networks, large language models and just about every cool phrase that causes nerdgasms.

When we consider the status of education systems for humans – human learning – we don’t even seem to have an idea of where we are. Further, students who were once copying and pasting from the Internet for assignments now get large language models doing their homework – if they can access them.

Something doesn’t seem quite right about that. Certainly, the technological progress is great, but I’m not sure we’re making smarter humans. I’m also not sure we’re making wiser humans.

What’s there to do? There seems to be at least some people thinking about the topic of education and AI, but as a society, are we too busy paying the bills and chasing red dots to have time for the future?

Beware The False ChatGPTs.

It was inevitable. Unscrupulous folks decided to cash in on ChatGPT’s financial success by tossing some fake ones out there for cell phones – and they’re making money from this fleeceware.

See this article by the experts at Sophos for more.

Here is the list of apps you should delete from your phone at the time of this writing:

  • Open Chat GBT – AI Chatbot App
  • AI Chatbot – Ask AI Assistant
  • AI Chat GBT – Open Chatbot App
  • AI Chat – Chatbot AI Assistant
  • Genie – AI Chatbot
  • AI Chatbot – Open Chat Writer

More on these fake ChatGPTs can be found in an in-depth article on Tom’s Hardware, which includes how to use the legitimate ChatGPT.

It should go without saying that you should always get software from the source and if deals look too good to be true, they probably are.

Artificial Intelligence on a Real Planet.

With all that I have written about recently, I think it’s important to point out that I’m not against artificial intelligence. I’m concerned about how it is used and managed, which I also believe we all should be, down to the last ditch digger.

Artificial intelligence on existing systems is pretty interesting, but there is an implicit limit because those systems are hard-wired to have only 2 choices at the lowest levels. Bayesian logic allows two-choice prediction to emulate something that appears more complicated, but at the lowest levels it’s a bunch of on/off switches being run automatically.
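A two-choice Bayesian update is small enough to write out in full. This is a minimal sketch with made-up probabilities, not a model of any particular system:

```python
# A two-hypothesis (binary) Bayesian update: each piece of evidence
# shifts belief between exactly two choices. Probabilities are made
# up for illustration.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) for a yes/no hypothesis."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start at 50/50, then fold in three observations, each twice as
# likely if the hypothesis is true than if it is false.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8,
                          p_evidence_if_false=0.4)
print(f"belief after 3 observations: {belief:.3f}")
```

Underneath, the “prediction” is still simple arithmetic on a handful of numbers; the apparent complexity comes from chaining many such two-choice updates together.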

What we also have to understand is that computers are inferior to humans in being human. We are not meat computers; they are silicon humans we’re making in our image as best we can with what we have.

What we have has changed from mud, sticks and incantations to DC electricity and semiconductors that operate on a complicated light-switch mechanism, like 2 ships communicating at night on a dark sea. And we’re going further even as we write software for the old systems – the new COBOL, maybe, that will haunt us for decades, perhaps centuries, when the rest of the world has moved on to whatever comes next.

I was prompted on this by checking out the website of a new subscriber to one of my blogs (thanks!), which, while from November last year, reminds us of the positive work being done with AI, and what we can improve.

Where computers have not failed is also interesting to note: They do not fail at communicating with each other because of common protocols we established globally. All the computers doing the communication on the Internet are doing it in ways that the others understand, or they’re not on the Internet.
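That agreement can be shown in miniature. The sketch below invents a trivial framing rule (a 4-byte length prefix before each message) to stand in for a real protocol; both endpoints follow the same rule, which is the whole point:

```python
# A miniature "common protocol": both ends agree that every message
# is a 4-byte big-endian length followed by that many bytes of UTF-8
# text. The framing rule is invented here purely for illustration.
import socket
import struct

def send_msg(sock, text):
    payload = text.encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    length = struct.unpack(">I", sock.recv(4))[0]
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data.decode("utf-8")

# A connected socket pair stands in for two machines on a network.
one_end, other_end = socket.socketpair()
send_msg(one_end, "hello from the other side")
print(recv_msg(other_end))
one_end.close()
other_end.close()
```

An endpoint that doesn’t follow the agreed framing simply can’t exchange messages, which is exactly the “or they’re not on the Internet” condition.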

Of course, we’re not the only ones communicating. Elephants, as an example, seem to have very complex communication that includes seismic signals. African savannah elephants use infrasound to communicate with other elephants across long distances, without needing updates or special cables from Apple.

They’re intelligent too, with social structures – enough so that studying them gives us hints about our own evolution.

It seems wherever we look on Earth these days, we’re seeing that other species are exhibiting intelligence, from the octopus to the dolphin, from the whale to the crow. It’s unlikely that they suddenly became intelligent. It’s more likely that they’ve been intelligent all along and we’ve not been intelligent enough to figure it out.

As a species, it’s questionable whether we are that intelligent, but we have enough intelligent ones in the herd finding things out.

So what role does artificial intelligence have in this context? It’s beginning to be used to communicate with other species on the planet.

“…In the bat chapter where I discuss the research of Yossi Yovel of Tel Aviv University, there’s a particular study in which his team monitored [nearly two] dozen Egyptian fruit bats for two and a half months and recorded their vocalizations. They then adapted a voice-recognition program to analyze [15,000 of] the sounds, and the algorithm correlated specific sounds with specific social interactions captured via videos—such as when two bats fought over food. Using this, the researchers were able to classify the majority of bats’ sounds. That is how Yovel and other researchers such as Gerry Carter of the Ohio State University have been able to determine that bats have much more complex language than we previously understood. Bats argue over food; they distinguish between genders when they communicate with one another; they have individual names, or “signature calls.” Mother bats speak to their babies in an equivalent of “motherese.” But whereas human mothers raise the pitch of their voices when talking to babies, mother bats lower the pitch—which elicits a babble response in the babies that learn to “speak” specific words or referential signals as they grow up. So bats engage in vocal learning…”

“How Scientists Are Using AI to Talk to Animals”, Scientific American, Sophie Bushwick, February 7, 2023

Bats. This isn’t science fiction, this is science.
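The workflow the quote describes, recording sounds, labeling them against video-observed interactions, and training a program to sort new sounds, is essentially supervised classification. Here is a toy sketch with invented features and labels; the real research used far richer audio features and models:

```python
import math

# Toy version of the approach: pair sound features with labeled social
# interactions, then classify new sounds by similarity.
# Features (pitch_hz, duration_s) and labels are invented for illustration.
training = [
    ((22000, 0.10), "food_dispute"),
    ((21000, 0.12), "food_dispute"),
    ((15000, 0.30), "motherese"),
    ((14500, 0.28), "motherese"),
]

def classify(sound):
    # Nearest-neighbour over the labeled examples, with rough scaling
    # so pitch and duration contribute comparably to the distance.
    def dist(a, b):
        return math.hypot((a[0] - b[0]) / 1000, (a[1] - b[1]) * 10)
    return min(training, key=lambda ex: dist(sound, ex[0]))[1]

print(classify((21500, 0.11)))  # food_dispute
print(classify((15200, 0.29)))  # motherese
```

The leap from this toy to the bat study is one of scale and feature engineering, not of kind: label enough examples and the algorithm starts mapping sound to social meaning.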

This sort of use of technology could well be what helps our species more than creating more lawyers or billionaires wanting to go to space, or to live on Mars, or launch cars into space.

Maybe this technology could help us make sense of our own planet, for ourselves, even before we consider colonizing other planets. It’s pretty clear we are only beginning to understand other intelligences on our own planet.

Whose Safe Space Is It Anyway?

Corporations have been creating “safe spaces” for themselves for a while, and while that can be read as either good or bad depending on how you feel about things, let’s just accept that as an objective truth.

Disney took things from the public domain and copyrighted their versions, making them as ubiquitous as their marketing – and then worked hard to close the door for others to do the same with their works which should have passed to the public domain.

The Sonny Bono Act, or Mickey Mouse Protection Act, extended copyright terms to keep works from going into the public domain:

“…Following the Copyright Act of 1976, copyright would last for the life of the author plus 50 years (or the last surviving author), or 75 years from publication or 100 years from creation, whichever is shorter for a work of corporate authorship (works made for hire) and anonymous and pseudonymous works. The 1976 Act also increased the renewal term for works copyrighted before 1978 that had not already entered the public domain from 28 years to 47 years, giving a total term of 75 years.[3]

The 1998 Act extended these terms to life of the author plus 70 years and for works of corporate authorship to 120 years after creation or 95 years after publication, whichever end is earlier.[4] For works published before January 1, 1978, the 1998 act extended the renewal term from 47 years to 67 years, granting a total of 95 years.

This law effectively froze the advancement date of the public domain in the United States for works covered by the older fixed term copyright rules…”

Copyright Term Extension Act, Wikipedia, accessed on 16 May 2023.
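The terms in the quote can be turned into a small back-of-the-envelope calculation. This is a sketch only; real copyright terms have many edge cases it ignores:

```python
# Rough sketch of the 1998 Act terms quoted above (edge cases omitted).

def corporate_term_end(created: int, published: int) -> int:
    # "120 years after creation or 95 years after publication,
    #  whichever end is earlier"
    return min(created + 120, published + 95)

def pre_1978_total_term(first_copyright: int) -> int:
    # 28-year initial term + 67-year renewal = 95 years total
    return first_copyright + 28 + 67

# A work first copyrighted in 1928 is protected through 1928 + 95 = 2023,
# which is why 1928 works entered the US public domain on January 1, 2024.
print(pre_1978_total_term(1928))  # 2023
```

It is not a coincidence that the original Mickey Mouse shorts, from 1928, sat just inside the window this math protected.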

Corporations acted in their own self-interest. Lawrence Lessig’s Free Culture was the first place I read of it, though I don’t know that he was the first to note it. They created a safe space for their copyrights while those works had their roots in the public domain.

The world is full of other examples.

Bill Gates would dumpster dive and study code printouts, among other things. Famous as the founder of Microsoft, lots of people don’t seem to realize that Microsoft started by understanding – and borrowing, if not buying – code from others. There’s nothing particularly shameful about it.

“The best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating systems.”

Bill Gates, interview with Susan Lammers, 1986.

I don’t think any programmer would disagree with the sentiment. Yet, the same Bill Gates who did that also wrote an open letter to hobbyists in 1976 that did not reflect that sentiment:

“…The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however, 1) Most of these “users” never bought BASIC (less than 10% of all Altair owners have bought BASIC), and 2) The amount of royalties we have received from sales to hobbyists makes the time spent on Altair BASIC worth less than $2 an hour.

Why is this? As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?…”

“An Open Letter To Hobbyists”, Bill Gates, cited in the New York Times archives.

Most people would say, “Well, he has a point.” And he did – in protecting a business model he was creating which kept people from being able to view the source code to learn from it. Was it a bad thing? A good thing? It doesn’t matter, it was a thing.

At the time, it was a bunch of scattered hobbyists before the Internet against a corporation that could afford lawyers and marketing. It was the wild, wild west of personal computing.

The above examples are 2 of many ‘negotiations’ between the public and corporations, though with the increased political influence corporations have through lobbying – and with money now being free speech – it’s hard to consider it a legitimate negotiation.

If you have an hour and a half, Pirates of Silicon Valley is worth watching.

The point is that corporations always do things like this, for better or worse and for better and worse. And with the emergence of artificial intelligence-like technologies, the safe space of creators is being abstracted away into statistics. By extension, this also applies to the privacy of everyone’s data.

My thought is, the greater the wings, the more confident the bird should be where it roosts. If corporations are indeed made of individuals working toward common goals and are creating things, that’s great! But it should not come at the cost of competition, which is one of the founding principles of capitalism… which corporations tend to cite only when convenient.

“Do as we say. Not as we do.”

More Lawyers? Really?

We’re all guilty of looking at the world through our own lenses of experience. The person barely making ends meet while working 3 jobs in a thankless economy to support a family is not going to see things the same way as, say, a doctor or lawyer, particularly after they’ve done their internships.

The people who get quoted the most aren’t the majority. In fact, they’re usually a minority that lives in a bubble, immune to most problems on the planet, and because the bubble is sacred to them, they almost never venture outside it.

CEOs live in a different world, blissfully unaware of the day to day issues of people who don’t live their lives. For some reason, these people are often glamorized yet they provide hints of their own biases at times.

Sundar Pichai, CEO of Google, recently demonstrated one. When talking about societal upheaval and jobs, he had an odd go-to but one that a CEO would be very comfortable with.

“…“I think it’ll touch everything we do,” Pichai said of A.I. in an interview with The Verge’s Nilay Patel published Friday. “I do think there are big societal labor market disruptions that will happen.”

But the tech chief thinks that A.I. could also make some jobs better, if it’s done right. He used the example of the legal profession, which some believe will be the most disrupted by A.I., and said that even with technological developments, the need for some skills and services will not be eliminated altogether.  

“So, A.I. will make the profession better in certain ways, might have some unintended consequences, but I’m willing to almost bet 10 years from now, maybe there are more lawyers.”…

“Google’s Sundar Pichai thinks A.I. will spur ‘big societal labor market disruptions’ but also make professions better”, Prarthana Prakash, Fortune, May 12, 2023.

I’m not going to put words into his mouth; there’s no need. These are questions he has likely primed himself for, with answers that minimize the societal upheaval it will cause. He’s the CEO of Google. In 2022, Sundar Pichai made $226 million as CEO of Google, mainly in stock awards. He’s vested in the success of Google, and the layoffs in January were… unfortunate for him, I suppose.

And we need more lawyers? Really? Are they planning to make things that much more complicated and expensive? Or does he picture a future where lawyers will charge less money?

Given the nature of how disruptive some of the technologies being dubbed “AI” by the hype cycle are, I might be more interested to hear from collective bargaining organizations than a CEO of Google when it comes to such disruption.

His perspective is implicitly biased; he’s vested in a corporation whose technology interests are not necessarily in line with those of most of its users. He’s not a bad person, I’m not saying that. I’m saying what he is quoted as saying seems cavalier.

What I am saying is that someone who says, “We’ll have more lawyers” like it’s a good thing might not have thought things through beyond his bubble. Take it for what it’s worth.

There are a lot of people whose ways of life are at stake in all of this, and I’m not sure that they all want to be lawyers. I hope not, anyway. Justice is blind, they say.