When You Can’t Trust Voices.

Generative AI is allowing people to do all sorts of things, including imitating voices we have come to respect and trust over the years. In the most recent case, Sir David Attenborough objects strongly, finding the use of his voice ‘profoundly disturbing’.

His voice is being used in all manner of ways.

It wasn’t long ago that Scarlett Johansson suffered a similar insult, one that was quickly ‘disappeared’.

The difference here is that a man who has spent decades showing people the natural world is having his voice used in disingenuous ways, and that should give us all pause. I use generative artificial intelligence, as do many others, but there is no way I would even consider misrepresenting what I write or work on in the voice of someone else.

Who would do that? Why? It dilutes the trust. Sure, it can be funny to have a narration by someone like Sir David Attenborough, or Morgan Freeman, or… all manner of people… but trotting out their voices to misrepresent truth is a very grey area in an era of half-truths and outright lies being distributed on the Babel of the Internet.

Somewhere – I believe it was in Lessig’s ‘Free Culture’ – I read that the UK allowed artists to control how their works were used. A quick search turned this up:

The Copyright, Designs and Patents Act 1988, is the current UK copyright law. It gives the creators of literary, dramatic, musical and artistic works the right to control the ways in which their material may be used. The rights cover: Broadcast and public performance, copying, adapting, issuing, renting and lending copies to the public. In many cases, the creator will also have the right to be identified as the author and to object to distortions of his work.

The UK Copyright Service

It would seem that something similar would have to be done with the voices and even the appearance of people around the world – yet in an age moving toward artificial intelligence, where content has been scraped without permission, the only people who can actually stop this are the ones doing the scraping.

The world of trusted humans is being diluted by untrustworthy humans.

The Dark Side of AI.

It didn’t take as long as we expected. Last week, a former school athletic director was arrested for framing a principal with an AI-generated voice recording.

This being a campaign year, I thought most of the AI hijinks would revolve around elections around the world – and those are happening – but I didn’t think we’d see such early adoption of AI in this sort of thing. And by an athletic director, no less – not a title typically known for mastery of technology.

AI has a dark side, which a few of us have been writing about. The Servitor does a good job of documenting what they coined ‘Dark ChatGPT’, well worth a look. Any technology can be twisted to our own devices.

It’s not the technology.

It’s us.

Again.

Maybe the CEO of Google was right about a need for more lawyers.

DHS Artificial Intelligence Safety And Security Board Has Some Odd Appointments.

Now that we’ve seen that generative artificial intelligence can be trained ethically, without breaking copyright laws, the list of people appointed to the DHS Artificial Intelligence Safety and Security Board seems less than ideal.

The Board is supposed to ‘advance AI’s responsible development and deployment’ (emphasis mine, on ‘responsible’), yet some on that Board took shortcuts.

Shortcuts in relation to any national security issue seem like a bad thing.

Here’s the list.

There are some dubious companies involved. The argument can be made – and it probably will be – that the companies are part of the national infrastructure, but is it national infrastructure that controls the United States, or is it the other way around?

I don’t know whether these picks are good or bad. I will say that there are some that, at least in the eyes of others, have been irresponsible. That would fall under Demonstrated Unreliability.

“Free Speech” And Social Media.

I’ve seen plenty of folks talking about ‘First Amendment’ and ‘Freedom of Speech’ in the context of TikTok, as I saw on Facebook, as I saw on…

All the way back to AOL. Strangely, I don’t remember the topic coming up on BBSs (Bulletin Board Systems), mainly because everyone on those generally understood the way things were.

As a moderator on websites in the early days of the Internet right up to WSIS, I heard it again and again. “You can’t restrict my freedom of speech!”

Social media platforms are private companies and are not bound by the First Amendment. In fact, they have their own First Amendment rights. This means they can moderate the content people post on their websites without violating those users’ First Amendment rights. It also means that the government cannot tell social media sites how to moderate content. Many state laws to regulate how social media companies can moderate content have failed on First Amendment grounds.

Most sites also cannot, in most cases, be sued because of users’ posts due to Section 230 of the federal Communications Decency Act.

“Free Speech on Social Media: The Complete Guide”, Lata Nott, FreedomForum.

The link for the quote leads to a great article worth reading, because there are some kinds of speech that you can get in trouble for. No sense rewriting a good article.

So this idea about ‘free speech’ on any platform controlled by anyone other than yourself is incorrect. Wrong.

As long as you don’t break the terms of service or the laws of the country you’re in (or the country where the platform is legally hosted), you can say whatever you want. A lot of people assume a principle of freedom of speech because it’s in the platforms’ interest to let people say whatever they want – as long as it doesn’t impact their ability to do business by irritating other users, threatening them, etc.

Even your own website is subject to the terms and conditions of the host.

There’s a quote falsely attributed to Voltaire that people pass around, too: “To learn who rules over you, simply find out who you are not allowed to criticize.” Powerful words, thoughtful words, unfortunately expressed by someone who is… well, known for the wrong things.

It doesn’t seem to apply that much on social media platforms anyway. I have routinely seen people on Twitter griping about Twitter, and on Facebook griping about Facebook… the only social media platform I haven’t seen it on is LinkedIn, but I imagine someone has probably done it there too.

This idea seems to come up at regular intervals. It could be a generational thing. In a world where we debate what should be taught in schools, this is one of those things.

Government interference in these platforms’ moderation could be seen as a First Amendment issue. With TikTok, there’s likely going to be a showdown over freedom of speech in that context, but don’t confuse it with users’ First Amendment rights. It’s strange that they might make that argument, too, because where ByteDance (the owning company) is based, they couldn’t sue their government. China’s not known for freedom of speech. Ask Tibet.

The second you find yourself defending a platform you don’t control, take a breath and ask yourself if you can’t just do the thing somewhere else. You probably should.

The Fediverse isn’t too different, except you can find a server with rules that work for you to connect to it.

The End Of Non-Compete.

The FTC banned non-compete agreements, and I wish this had come a few decades earlier. Non-competes kept me from starting businesses and even working for competitors in the past, though more as a matter of honoring the agreement than any legal threat by a former employer.

When you did specialized work for companies, as an employee or a contractor, that non-compete agreement was always a pain. A silent tyranny.

Word around the water cooler always speculated that non-competes were unconstitutional (13th Amendment), but all the ‘legal experts’ around the water cooler were not people who would pay my lawyers.

The 13th Amendment provides that “neither slavery nor involuntary servitude . . . shall exist.” If the protection against involuntary servitude means anything for workers, it means that they have a right to leave their jobs to seek other employment. Indeed, one of the few effective bargaining chips for workers who wish to improve their wages and working conditions is the ability to threaten to quit if one’s demands are not met. Members of the Reconstruction Congress understood that employee mobility was essential to freedom from involuntary servitude. Enslaved people obviously lacked the ability to leave their masters. Even after they were no longer enslaved, without mobility, people freed from slavery would have been forced to work for their former masters. The Reconstruction Congress enforced the 13th Amendment with the 1867 Anti-Peonage Act, prohibiting employers from requiring their workers to enter into contracts that bind them to their employers. Non-compete clauses have similar effects because they prohibit workers from leaving their jobs to find other similar jobs.

“Non-Compete Clauses and the 13th Amendment: Why the New FTC Rule Is Not Only Good Policy but Constitutionally Mandated”, Rebecca Zietlow, JuristNews, Feb 13th, 2023

Employers are stating that they are concerned about trade secrets and so forth, which on the surface seems legitimate – but that’s what confidentiality agreements are for. Also, a lack of non-competes means that the value of employees to employers is higher. If you don’t want your people to go work for a competitor, don’t give them reasons to leave. So many companies that I did leave were terrible at listening to employee concerns – not concerns about themselves, but about the company – making jobs unnecessarily political.

Navigating office politics is… tiresome. In fact, I left one company simply because I got tired of a DBA who kept screwing up but was seemingly protected by the business team because he drank with them. He squandered quite a bit of their money shoring up his position by insisting on the proprietary databases he knew, when open source databases could have done the same job much more cost-effectively. I had a non-compete, so I didn’t even bother working within that company’s niche.

And I could have. I had offers. Yet I just didn’t feel like dealing with a vengeful bit of litigation, and that business team could be vengeful. I saw it a few times.

That’s just one story.

Most of these agreements protect employers, and that’s fair to the extent that any work done for them is basically a commissioned work in the realm of software engineering.

I hope this isn’t screwed up; I hope the Chamber of Commerce’s appeal fails – not that I want to screw over employers, but employees shouldn’t get screwed over when they’re stuck in a dead end and have built up expertise in a niche. I’m hoping this gives a better balance.

At any point in the future, I could be either an employer or employee.

When Is An Algorithm ‘Expressive’?

Yesterday, I was listening to a webinar on Privacy Law and the United States First Amendment when I heard that lawyers for social networks are claiming both that the networks have their own free speech rights as speakers, and that they are not the speaker – that they are simply presenting content users have expressed under the users’ freedom of speech. How the arguments were presented I don’t know, and despite showing up for the webinar I am not a lawyer1. The case before the Supreme Court was being discussed, but that’s not my focus here.

I’m exploring how it would be possible to claim that a company’s algorithms that impact how a user perceives information could be considered ‘free speech’. I began writing this post about that and it became long and unwieldy2, so instead I’ll write a bit about the broader impact of social networks and their algorithms and tie it back.

Algorithms Don’t Make You Obese or Diabetic.

If you say the word ‘algorithm’ around some people, their eyes immediately glaze over. It’s really not that complicated; a repeatable procedure is basically an algorithm. A recipe in use is an algorithm. Instructions from Ikea are algorithms. Hopefully they give you what you want, and if they don’t, they’re ‘buggy’.

Let’s go with the legal definition of what an algorithm is3. Laws don’t work without definitions, and code doesn’t either.

Per Cornell’s Legal Information Institute, an algorithm is:

“An algorithm is a set of rules or a computational procedure that is typically used to solve a specific problem. In the case of Vidillion, Inc. v. Pixalate Inc. an algorithm is defined as “one or more process(es), set of rules, or methodology (including without limitation data points collected and used in connection with any such process, set of rules, or methodology) to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations, including those that transform an input into an output, especially by computer.” With the increasing automation of services, more and more decisions are being made by algorithms. Some examples are; criminal risk assessments, predictive policing, and facial recognition technology.”

By this definition, and perhaps in its simplest form, adding two numbers is an algorithm, which also fits just about any technical definition out there. That’s not at issue.
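To make that concrete, here’s the ‘adding two numbers’ case as a few lines of Python – a trivial sketch, nothing more:

```python
# Adding two numbers: a set of rules that transforms an input into
# an output - an algorithm by the Vidillion v. Pixalate definition
# quoted above, and by just about any technical definition as well.
def add(a: float, b: float) -> float:
    return a + b

print(add(2, 3))  # 5
```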

What is at issue in the context of social networks is how algorithms impact what we view on a social networking website. We should all understand in the broad strokes that Facebook, Twitter, TikTok and their ilk are in the business of showing people what they want to see, and to do this they analyze what people view so that they can give people what they want.

Ice cream and brownies for breakfast, everyone!

Let’s agree every individual bit of content you see that you can act on, such as liking or re-transmitting, is a single item. Facebook sees you like ice cream, Facebook shows you posts of ice cream incessantly. Maybe you go out and eat ice cream all the time because of this and end up with obesity and diabetes. Would Facebook be guilty of making you obese and diabetic?

Fast food restaurants aren’t considered responsible for making people obese and diabetic. We have choices about where we eat, just as we have choices about what we do with our lives outside of a social network context. Further, almost all of these social networks give you options to not view content, from blocking to reporting to randomly deleting your posts and waving a finger at you for being naughty – without telling you how.
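For illustration, here’s a minimal sketch of that ice cream feedback loop in Python. Everything in it – the topic names, the selection rule – is hypothetical; no platform publishes its actual code:

```python
from collections import Counter

# Engagement-driven selection: show more of whatever the user acted on.
engagements = Counter()

def record_like(topic: str) -> None:
    engagements[topic] += 1

def next_post(candidate_topics: list[str]) -> str:
    # Prefer the topic the user has engaged with the most so far.
    return max(candidate_topics, key=lambda topic: engagements[topic])

record_like("ice cream")
record_like("ice cream")
record_like("hiking")

print(next_post(["ice cream", "hiking", "news"]))  # ice cream, every time
```

Every like tilts the scale further toward itself. The loop serves you what you want – but choosing to act on it is still yours.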

Timelines: It’s All A Story.

As I wrote elsewhere, we all choose our own social media adventures. Most of my social networks are pretty well tuned to feed me new things to learn every day, while doing a terrible job of providing me information on what all my connections are up to. It’s a real estate problem on social network sites, and not everyone can be in that timeline. Algorithms pick and choose, and if there are paid advertisements to give you free access, they need space too.

Think of it all as a personal newspaper. Everything you see is picked for you based on what the algorithms decide, and all of that information is competing to get into your eyeballs, maybe even your brain. Every story is shouting ‘pick me! pick me!’ with catchy titles, wonderful images, and maybe even some content – because everyone wants you to click through to their website so they can hammer you with advertising.4

Yet when we step back from those individual stories, the social networking site is curating things into an ordered feed. Let’s assume that what it thinks you like to see the most is at the top, and it goes down in priority based on what the algorithms have learned about you.

Now think of each post as a page in a newspaper. What’s on the front page affects how you perceive everything in the newspaper. Unfortunately, because it’s all shoved into a prioritized list for you, you get things that are sometimes in a strange order, giving a weird context.

Sometimes you get stray things you’re not interested in because the algorithms have grouped you with others. Sometimes whatever you last wrote about will suddenly have related posts covering every page of that newspaper.

You might think you’re picking your own adventure through social media, but you’re not directly controlling it. You’re randomly hitting a black box to see what comes out in the hope that you might like it, and you might like the order that it comes in.

We’re all beta testers of social networks in that regard. They are constantly tweaking algorithms to try to do better, but doing better isn’t necessarily for you. It’s for them, and it’s also for training their artificial intelligences more than likely. It’s about as random as human interests are.
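To tie the newspaper metaphor together, here’s a toy version of such a prioritized feed – the posts, scores, and ad interval are all invented for illustration:

```python
# A toy prioritized feed: posts sorted by a learned interest score,
# with paid advertisements interleaved because they need space too.
posts = [
    ("volcano documentary", 0.91),
    ("a friend's vacation photos", 0.34),
    ("open source database news", 0.87),
    ("a distant cousin's lunch", 0.12),
]
ads = ["ad: sneakers", "ad: streaming service"]

ranked = [title for title, _ in sorted(posts, key=lambda p: p[1], reverse=True)]

feed = []
ad_slot = iter(ads)
for position, title in enumerate(ranked, start=1):
    feed.append(title)
    if position % 2 == 0:  # an ad after every second post (hypothetical)
        feed.append(next(ad_slot, "ad: house ad"))

for item in feed:
    print(item)
```

The ‘front page’ is whatever scored highest, the order supplies the context, and your connections only appear if their scores earn the real estate.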

Developing Algorithms.

Having written software in various companies over the decades, I can tell you that if there’s a conscious choice to express something with them, to get people to think one way or the other (the point of ‘free speech’), it would have to be very coordinated.

Certain content would have to be weighted, as is done with advertising. Random content churning through feeds would not fire things off with the social networking algorithms unless someone manually chose to push it across users. That requires a lot of coordination, lots of meetings, and lots of testing.

It can be done. With advertising, it has been done overtly. Another example is the recent push against fake news, which has attempted to proactively check content with independent fact checkers.
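As a sketch of what that kind of coordinated weighting might look like – one global table, chosen in meetings, applied to every user’s feed; all labels and numbers are hypothetical:

```python
# A single editorial weight table applied across all users.
# This is the coordination: not per-user learning, but one
# deliberate set of multipliers. Values are invented.
EDITORIAL_WEIGHTS = {
    "fact_checked": 1.5,  # boost content cleared by fact checkers
    "disputed": 0.2,      # bury content flagged as fake news
    "paid": 2.0,          # advertising, done overtly
}

def adjusted_score(base_score: float, labels: set[str]) -> float:
    for label in labels:
        base_score *= EDITORIAL_WEIGHTS.get(label, 1.0)
    return base_score

# The same post, before and after being flagged - for everyone at once:
print(adjusted_score(0.8, set()))         # 0.8
print(adjusted_score(0.8, {"disputed"}))  # ~0.16
```

One table, every feed – that’s the scale of deliberateness involved.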

Is that free speech? Is that freedom of expression of a company? If you look at this case again, you will likely draw your own conclusions. Legally, I have no opinion because I’m not a lawyer.

But as a software engineer, I look at it and wonder if this is a waste of the Court’s time.

  1. It should be in the interest of software engineers and others to know about the legal aspects of what we have worked on and will work on. Ethics are a thing. ↩︎
  2. It still is, and I apologize if it’s messy. This is a post I’ll likely have to revisit and edit. ↩︎
  3. Legal definitions of what an algorithm is might vary around the world. It might be worth searching for a legal definition where you are. ↩︎
  4. This site has advertising. It doesn’t really pay and I’m not going to shanghai viewers by misrepresenting what I write. It’s a choice. Yet to get paid for content, that’s what many websites do. If you are here, you’re appreciated. Thanks! ↩︎

Better Business, Privacy

When I read, “Better Business TT launches app, website: Protecting customers from rip-off”, I laughed a bit. It’s not a bad idea, so let me explain.

The article, on the Internet, doesn’t link to the website or app.

It doesn’t really say much about the application other than that it’s a concept borrowed, with attribution, from the successful Angi (formerly Angie’s List) online directory in the United States. That’s the way of these things: it really is dependent on community buy-in, so the article and content related to this should be sticky. That article is not sticky. It was coated in butter and sent out the door.

The website name might make people with American exposure confuse it with the Better Business Bureau, which it is not – nor even a local equivalent.

Having mastered the art of search engines long ago, back when people were still fighting with the blinking lights on VCRs, I found the BetterBusinessTT website. Again, it’s pretty generic; it could be that it’s early days and the site is waiting for an organic ‘boom’ to happen, but it needs more oomph in that regard.

And again, it’s not a bad idea. It’s a good idea, though with an estimated population of about 1.5 million and a lot of economic disparity, I don’t know that it will beat out personal recommendations. The security aspect mentioned in the article, though, was very funny, and it’s the reason I wrote this.

Given the lack of data privacy laws in Trinidad and Tobago, and the recent data breaches, why would anyone consider their data secure in this country? And what stops someone from tampering with the website so they can get sales for their own services and products?

This is not against BetterBusinessTT. Not at all. It’s about knowing where the laws of liability land on information in Trinidad and Tobago.

Information has been the ‘new oil’ for over 20 years at this point, and it looks like WASA (the Water and Sewerage Authority) may be in charge of it these days.

Political And AI Intrigue In Social Media.

I normally don’t follow politics because politics doesn’t really follow me – it tends to stalk me instead. Yet today, with social media in the headline, I paid attention – because it’s not just politics involved. There’s artificial intelligence as well, or what is accused of it.

From the first article:

A US federal judge has limited the Biden administration’s communications with social media companies which are aimed at moderating their content.

In a 155-page ruling on Tuesday, judge Terry Doughty barred White House officials and some government agencies from contacting firms over “content containing protected free speech”.

It is a victory for Republicans who have accused officials of censorship.

Democrats said the platforms have failed to address misinformation.

The case was one of the most closely-watched First Amendment battles in the US courts, sparking a debate over the government’s role in moderating content which it deemed to be false or harmful…

“Biden officials must limit contact with social media firms”, BBC News, Annabelle Liang, 5th July, 2023.

By itself, it’s pretty damning for the Democrats, who, like the Republicans, aren’t my favorite people in the world. It isn’t an either/or proposition, but it’s usually simplified to that so that both sides keep reading for advertising.

Now here’s the second article.

Evidence of potential human rights abuses may be lost after being deleted by tech companies, the BBC has found.

Platforms remove graphic videos, often using artificial intelligence – but footage that may help prosecutions can be taken down without being archived.

Meta and YouTube say they aim to balance their duties to bear witness and protect users from harmful content.

But Alan Rusbridger, who sits on Meta’s Oversight Board, says the industry has been “overcautious” in its moderation…

“AI: War crimes evidence erased by social media platforms”, BBC Global Disinformation Team, Jack Goodman and Maria Korenyuk, 1 June 2023.

The artificial intelligence angle is from a month ago. The political angle dealing with Democrats and Republicans (oh my!) is from today, because of the Federal Judge’s ruling. Both deal with content being removed from social media.

The algorithms on social media removing content related to Ukraine is not something new when it comes to Meta; yours truly spent time in Facebook jail for posting an obvious parody of a Ukrainian tractor pulling the Moskva – before it was sunk. Meta labeled it as false information, which of course it was – it was a parody, and any gullible idiot who thought a Ukrainian tractor was pulling the Moskva deserves to be made fun of.

Clearly, the Moskva would need 2 Ukrainian tractors to pull it. See? Again, comedic.

These stories are connected in that the whole idea of ‘fake news’ and ‘trusted information’ has been politicized just about everywhere, and by politicized I also mean polarized. Even in Trinidad and Tobago, politicians use the phrases as if they are magical things one can pull out of… an orifice.

The algorithms – or the AI they blame – inject their own bias by removing some content and leaving other content up. Is some of this related to the ruling about Biden officials? I imagine it is. How much is debatable – yet, during Covid, people were spreading a lot of fake news that worked against the public interest in health.

The political angle had a Federal Court intervene. No such thing has happened with the artificial intelligence angle. That’s disturbing.

It looks like getting beyond Code 2.0 is becoming more important – or more overdue. What you see in the echo chambers of social media are just red dots, shining on the things others want us to see, and not necessarily the right things.

Exploring Beyond Code 2.0: Into A World of AI.

It’s become a saying on the Internet without many people understanding it: “Code is Law”. This is a reference to one of the works of Lawrence Lessig, revised since its original publication.

Code Version 2.0 dealt with many of the nuances of Law and Code in an era where we are connected by code. The fact that you’re reading this implicitly means that the Code allowed it.

Here’s an example that weaves its way throughout our society.

One of the more disturbing things to consider is that when Alexis de Tocqueville wrote Democracy in America 1, he recognized the jury as a powerful mechanism for democracy itself.

“…If it is your intention to correct the abuses of unlicensed printing and to restore the use of orderly language, you may in the first instance try the offender by a jury; but if the jury acquits him, the opinion which was that of a single individual becomes the opinion of the country at large…”

Alexis de Tocqueville, Volume 1 of Democracy in America, Chapter XI: Liberty of the Press In the United States (direct link to the chapter within Project Gutenberg’s free copy of the book)

In this, he makes the point that public opinion on an issue is summarized by the jury, for better and worse. Implicit in that is the discussion within the Jury itself, as well as the public opinion at the time of the trial. This is indeed a powerful thing, because it allows the people to decide instead of those in authority. Indeed, the jury gives authority to the people.

‘The People’, of course, means the citizens of a nation, and within that there is discourse between members of society regarding whether something is or is not right, or ethical, within the context of that society. In essence, it allows ethics to breathe, and in so doing, it allows Law to be guided by the ethics of a society.

It’s likely no mistake that some of the greatest concerns in society stem from divisions in what people consider to be ethical. Abortion is one of those key issues, where the ethics of the rights of a woman are put into conflict with the rights of an unborn child. On either side of the debate, people have an ethical stance based on their beliefs without compromise. Which is more important? It’s an extreme example, and one that is still playing out in less than complimentary ways for society.

Clearly no large language model will solve it, since large language models are trained with implicitly biased training data and algorithms – which is why they shouldn’t be involved – and this would likely go for the general artificial intelligences of the future as well. Machine learning, or deep learning, learns from us, and every learning model is developed by its own secret jury, whose stewed biases may not reflect the whole of society.

In fact, they would reflect a subset of society as disconnected from society as the companies that make them, since a company hires people based on its own values to move toward its version of success. Companies are about making money. Creating value is a very subjective thing for human society, but money is its currency.

With artificial intelligence being involved in so many things and with them becoming more and more involved, people should at the least be concerned:

  • AI-powered driving systems are trained to identify people, yet have been shown to detect people with darker skin tones less reliably.
  • AI-powered facial recognition systems are trained on datasets of facial images. The code that governs these systems determines which features of a face are used to identify individuals, and how those features are compared to the data in the dataset. As a result, the code can have a significant impact on the accuracy and fairness of these systems, which has been shown to have an ethnic bias (see the sketch after this list).
  • AI-powered search engines are designed to rank websites and other online content according to their relevance to a user’s query. The code that governs these systems determines how relevance is calculated, and which factors are considered. As a result, the code can have a significant impact on the information that users see, and therefore what they discuss, and how they are influenced.
  • AI-powered social media platforms are designed to connect users with each other and to share content. The code that governs these platforms determines how users are recommended to each other, and how content is filtered and ranked. As a result, the code can have a significant impact on the experiences of users on these platforms – aggregating into echo chambers.
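None of that bias is hard to surface, either. Here’s a minimal sketch of the per-group accuracy check referenced in the list above – the ‘test results’ are fabricated to show the disparity, not drawn from any real system or dataset:

```python
from collections import defaultdict

# Fabricated recognition results: (group, was_correctly_recognized)
test_results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in test_results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1, False as 0

for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")
# group_a: 75% accuracy
# group_b: 25% accuracy - the same model, very different outcomes
```

A check like this is cheap to run; shipping a system without it is a choice someone’s code made for society.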

We were behind before artificial intelligence reared its head recently with the availability of large language models, separating ourselves in ways that polarized us and made compromise impossible.

Maybe it’s time for Code Version 3.0. Maybe it’s time we really got to talking about how our technology will impact society beyond a few smart people.

1 This was covered in Volume 1 of ‘Democracy in America‘, available for free here on Project Gutenberg.

Artificial Extinction.

The discussion regarding artificial intelligence continues, with the latest round of cautionary notes making the rounds. Media outlets are covering it, like CNBC’s “A.I. poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn”.

Different versions of that article written by different organizations are all over right now, but it derives from one statement on artificial intelligence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety, Open Letter, undated.

It seems a bit much. Granted, depending on how we use AI we could be on the precipice of a variety of unpredictable catastrophes, and while pandemics and nuclear war definitely pose direct physical risks, artificial intelligence poses more indirect risks. I’d offer that this can make it more dangerous.

In the context of what I’ve been writing about, we’re looking at what we feed our heads with. We’re looking at social media being gamed to cause confusion. These are dangerous things. Garbage in, Garbage out doesn’t just apply to computers – it applies to us.

More tangibly, though, it can adversely impact our way(s) of life. We talk about the jobs it will replace, with no real plan for how to employ those displaced. Do people want jobs? I think that’s the wrong question, one we got stuck with like old paint on society’s canvas. The more appropriate question is, “How will people survive?” – a question we overlook because of the assumption that if people want to survive, they will want to work.

Is it corporate interest that is concerned about artificial intelligence? Likely not; they like building safe spaces for themselves. Sundar Pichai mentioned having more lawyers, yet a lawyer got himself into trouble when he used ChatGPT to write court filings:

“The Court is presented with an unprecedented circumstance,” Castel wrote in a previous order on May 4. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The filings included not only names of made-up cases but also a series of exhibits with “excerpts” from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don’t exist.

“Lawyer cited 6 fake cases made up by ChatGPT; judge calls it ‘unprecedented’”, Jon Brodkin, ArsTechnica, May 30th, 2023

It’s a good thing there are a few people out there relying on facts instead of artificial intelligence, or we might stray into a world of fiction – one created by those who control the large language models, and the general artificial intelligences that will come later.

Authoritarian governments could manipulate machine learning and deep learning to assure everyone’s on the same page in the same version of the same book quite easily, with a little tweaking. Why write propaganda when you can have a predictive text algorithm with a thesaurus of propaganda strapped to its chest? Maybe in certain parts of Taliban-controlled Afghanistan, it will detect that the user is female and give her a different set of propaganda, telling her to stay home and stop playing with keyboards.

It’s not hard to imagine all of this. It is a big deal, but in parts of the world like Trinidad and Tobago, you don’t see much about it because there’s no real artificial intelligence here, even as local newspaper headlines indicate real intelligence in government might be a good idea. The latest article I found on it in local newspapers online is from 2019, but fortunately we have TechNewsTT around discussing it. Odd how that didn’t come up in a Google search of “AI Trinidad and Tobago”.

There are many parts of the world where artificial intelligence is completely off the radar as people try to simply get by.

The real threat of any form of artificial intelligence isn’t as tangible as nuclear war or pandemics to people. It’s how it will change our way(s) of life, how we’ll provide for families.

Even the media only points at what we want to see, since the revenue model is built around that. The odds are good that we have many blind spots the media doesn’t show us even now, in a world where everyone who can afford it has a camera and the ability to share information with the world – but it gets lost in the shuffle of social media algorithms if it isn’t organically popular.

This is going to change societies around the globe. It’s going to change global society, where the access to large language models may become as important as the Internet itself was – and we had, and still have, digital divides.

Is the question who will be left behind, or who will survive? We’ve propped our civilizations up with all manner of things that have barely withstood previous changes in technology, and this is a definite leap beyond that.

How do you see the next generations going about their lives? They will be looking for direction, and presently, I don’t know that we have any advice. That means they won’t be prepared.

But then, neither were we, really.