Gaming The Medium

Even as we paint on society’s canvas, society paints on our individual canvases, and in this modern world of the Internet, social media and games, there’s a lot of paint being thrown around. Our world changes us, we change our world.

It’s not all as pretty as staged videos on Instagram, TikTok and Facebook reels, where ‘influencers’ do their best to find attractive red dots for people to chase. It’s in their interest. Before the Internet, it was broadcast media, but now with social media there is an increasingly large illusion of being able to interact when we might just be interacting with some algorithms attached to a dictionary.

Algorithms, though, carry dangers.

Via CuriosityGuide.

The video outlines some of what has been happening that isn’t good.

Algorithms, though, are important and can be used for good. We don’t see that as much as we should, largely because so many algorithms are at best questionable in whether they do good or harm. That questionability comes from what we each want to see from the world and what cost we are willing to pay for it – or, in the case of Internet trolls, the cost we wish others to pay for the world they want to see. I have more to write about trolls, but not yet.

What do we want? Before we figure out who we are, we seem to be told who we need to be. We mimic behaviors as children, and we grow within the framework supplied by our environments – rewards and punishments are set. We begin playing the game. In an environment, or, subjectively, an anti-environment.

Life In the Anti-Environment: Learning How To Play, a paper by Adam Pugen, is interesting in this regard – you can find the PDF of the paper here. It’s focused on video games, yet much of it could apply to social media, since the world is increasingly contrived and served through flat screens. This contrivance has been noted and mocked by more than one person; this German artist, who mocks Instagram photos, is a wonderful example.

In any game, there are things that are possible and things that are less possible. Take one of the more common real-world games: a lottery sells us on the possibility of winning despite an extremely low probability. The lottery has the distinction of being forced to be honest about the odds, yet I have yet to see that honesty in a lottery’s advertising. What do you spend, and what do you get? Most people see spending a few dollars every week over the course of a lifetime as a worthwhile risk – otherwise there would be no lottery.
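The lottery’s arithmetic is worth making explicit. A minimal sketch, using illustrative round numbers rather than any real lottery’s odds or prizes:

```python
# Rough expected-value sketch for a lottery ticket.
# The odds and prize are illustrative round numbers, not any real lottery's.
TICKET_PRICE = 2.00           # dollars per ticket
JACKPOT = 100_000_000         # illustrative jackpot
ODDS = 1 / 300_000_000        # illustrative chance of winning

expected_return = JACKPOT * ODDS          # what a ticket is 'worth' on average
expected_loss = TICKET_PRICE - expected_return

# Lifetime cost: a few dollars a week, every week, for 50 years.
lifetime_spend = TICKET_PRICE * 52 * 50

print(f"Expected return per ticket: ${expected_return:.2f}")
print(f"Expected loss per ticket:   ${expected_loss:.2f}")
print(f"Spent over a lifetime:      ${lifetime_spend:,.0f}")
```

Even with a generous illustrative jackpot, the expected return is a fraction of the ticket price, while the lifetime spend quietly adds up to thousands.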

The game environment is simply defined. Enter the world of multiplayer games, which connect people through the Internet and allow them to interact within certain guidelines. People, of course, find the loopholes, and some enjoy the anonymity of trolling as faceless names and avatars. Some try to play the game by plodding through; others pay to get ahead, all depending on the game and how it is set up. If that doesn’t sound like a metaphor for modern social media, I don’t know what does.

All around the world, people are playing the social media game. How one ‘wins’ depends on how one views success, but since social media is attached to real life more closely than other games, there is a financial aspect that is quite real for the majority of the planet. How one loses, implicitly, is by not winning.

Now that we have large language models and the promises of artificial intelligence making things so much better, the game is more complicated.

If money is how we measure success, there are billions of people losing. We could change how we measure success, or we could change the odds. Right now, the odds seem to be going the wrong way. There has to be some middle ground between tossing out participation trophies and a few winners taking all.

Ideas?

Discontinuing Twitter Auto-Share

While I was scheduling a post, I noticed that the auto-tweet functionality was no longer working.

Of course I looked into it, not because I’m a fan of Twitter – I wasn’t before Musk took it over and began breaking everything – but because it is an avenue that at least some people I interact with check in at. The auto-share was often a way to let people know I was still alive.

Why is the auto-share being turned off? Costs, of course.

“In early April, we experienced an unexpected suspension of our Twitter API access. This access is what powers Jetpack Social, which in turn helps you automatically share your blog posts to Twitter. Though the service was restored that same day, it turns out that there were bigger changes looming on the horizon. 

Twitter recently notified Automattic that it was dramatically changing the terms and pricing of the Twitter API. The cost increase is prohibitive for us to absorb without passing a significant price increase along to you, and we don’t see that as an option. We have attempted to negotiate a path forward, but haven’t been able to reach an agreement in time for Twitter’s May 1 cutoff. 

Given that, we have decided to discontinue using the Twitter API.”

“Why Twitter Auto-Sharing Is Coming to an End”, Blog, accessed April 19th, 2023.

It went on to suggest checking out Tumblr, so I dusted off the old Tumblr account – content from both should be showing up here now.

Having never really used Tumblr, I expect there will be a learning curve involved, so please pardon me while I break things.

AI: Standing on the Shoulders of Technology, Seeking Humanity

“When the mob governs, man is ruled by ignorance; when the church governs, he is ruled by superstition; and when the state governs, he is ruled by fear. Before men can live together in harmony and understanding, ignorance must be transmuted into wisdom, superstition into an illumined faith, and fear into love.”

Manly P. Hall, The Secret Teachings of All Ages.

It’s almost impossible to keep up with all that is going on related to discussion on what’s being marketed as artificial intelligence, particularly with a lot of speculation on how it will impact our lives.

Since the late 1970s, we have evolved technology from computers to personal computers to things we carry around that we still call ‘phones’, although their main purpose no longer seems to revolve around voice contact. In that time, we’ve gone from having technology on a pedestal that few could reach to a pedestal most of humanity can reach.

It has been a strange journey so far. If we measure our progress by technology, we’ve been successful. That’s a lot like measuring your left foot with your right foot, though, assuming you are equally equipped. If we measure success fiscally and look at the economics of the world, a few people have gotten fairly rich at the expense of a lot of people. If we measure it in knowledge access, more people have access to knowledge than any other time on the planet – but it comes with a severe downside of a lot of misinformation out there.

We don’t really have a good measure of the impact of technology in our lives because we don’t seem to think that’s important outside of technology, yet we have had this odd tendency in my lifetime to measure progress with technology. At the end of my last session with my psychologist, she was talking about trying to go paperless in her office. She is not alone.

It’s 2023. The paperless office was one of the technological promises of the late 1980s. That spans more than three decades. In that same period, it seems that the mob has increasingly governed, superstition has governed the mob, and the states have increasingly tried to govern. As a whole, despite advances in science and technology, we, the mob, have become more ignorant, more superstitious and more fearful. What’s worse, our attention spans seem to have dropped to 47 seconds. Based on that, many people have already stopped reading because of ‘TLDR’.

Into all of this, we now have artificial intelligence to contend with:

…Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI, many other researchers are worried too…

HT Tech, “AI Experts Aren’t Always Right About AI”

Counter to all of this, we have a human population that is clearly better at multiplying than at math. Most people around the world are caught up in their day-to-day lives, working toward some form of success even as we are inundated with marketing and biased opinions parading as news, all through the same channels that now connect us to the world.

In fact, it’s the price we pay, it’s the best price Web 2.0 could negotiate, and if we are honest we will acknowledge that at best it is less than perfect. The price we pay for it is deeper than the cost we originally thought and may even think now. We’re still paying it and we’re not quite sure what we bought.

“We are stuck with technology when what we really want is just stuff that works.”

Douglas Adams, The Salmon of Doubt.

In the late 1980s, boosts in productivity were sold to the public as ‘having more time for the things you love’ and variations on that theme, but that isn’t really what happened. In corporations, boosts in productivity meant that the more you did, the more you had to do. Speaking for myself, everyone hired for 40-hour work weeks but demanded closer to 50. Sometimes more.

Technology marketing hasn’t been that great at keeping promises. I write that as someone who survived as a software engineer with various companies over the decades. Like so many things in life, the minutiae multiplied.

“…Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year…”

Naomi Klein, “AI Machines Aren’t ‘Hallucinating’. But Their Makers Are”

There was a time when a single software engineer went from collecting requirements to analysis to design to coding to testing to quality assurance to implementation. Now these are all done by teams. They may well all be done by versions of artificial intelligence in the future, but anyone who has dealt with clients first-hand will tell you that clients are not that great at giving requirements, and that reality has been rolled into development processes in various ways.

Then there is the media aspect, where we are all media tourists picking our social media adventures, creating our own narratives from what social media algorithms pick for us. In a lot of ways, we have an illusion of choice when what we really get is what algorithms decide we want to see. That silent bias also includes content paywalled into oblivion, never mind all the linguistic bias, where we’re still discovering new biases.

Large Language Models like ChatGPT, called artificial intelligences with a degree of accuracy, have access to information that may or may not be the same that we may have in our virtual caves. They ‘learn’ faster, communicate faster and perhaps more effectively, but they lack one thing that would make them fail a real Turing test: Being human.

This is not to say that they cannot fake it convincingly by using Bayesian probability to stew our biases into something we want to read. We shouldn’t be too surprised, we put stuff in, we get stuff out, and the stuff we get out will look amazingly like stuff we put in. It is a step above a refrigerator in that we put in ingredients and we get cooked meals out, but just because a meal tastes good doesn’t mean it’s nutritious.
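A large language model is vastly more sophisticated, but the stuff-in/stuff-out point can be seen in even the crudest predictive model. A toy bigram sketch – the corpus and all names here are mine, purely for illustration:

```python
import random
from collections import defaultdict

# A toy bigram model: the crudest possible ancestor of a large language model.
# It can only ever emit words it was fed, in orderings it has already seen.
corpus = ("we put stuff in and we get stuff out and the stuff "
          "we get out looks like the stuff we put in").split()

# Record which words have been seen following each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Emit `length` words by repeatedly picking a seen successor."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("we", 8))
```

Everything it emits is a rearrangement of what went in; it cannot produce a word it was never fed.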

“We’re always searching, but now we have the illusion we’re finding it.”

Dylan Moran, “Dylan Moran on sobriety, his childhood, and the internet | The Weekly | ABC TV + iview”

These stabs at humanity with technology are becoming increasingly impressive. Yet they are stabs, with all that goes with stabs. A world limited to artificial intelligences can only make progress within the parameters and information we give them. They are limited, and they are as limited as we are, globally, biases and all. No real innovation happens beyond those parameters and information. They do not create new knowledge; they simply dress up old knowledge in palatable ways very quickly, and what is palatable now may not be so next year. Or next month.

If we had been dependent on artificial intelligences in the last century, we might not have had many of the discoveries we made. The key word, of course, is dependent. On the other hand, if we understood their limitations and incentivized humanity to add to this borgish collective of information, we might have made technological and scientific progress faster, but… would we have been able to keep up with it economically? Personally?

We’re there now. We’re busy listening to a lot of billionaires talk about artificial intelligences as if billionaires are vested in humanity. They’re not. We all know they’re not; some of us pretend they are. Their world view is very different. This does not mean it’s wrong, but if we’re going to codify an artificial intelligence with opinions somehow, it seems we need more than billionaires and ‘experts’ in such conversations. I don’t know what the solution is, but I’m in good company.

The present systems we have are biased. That’s the nature of any system, and the first role of a sustainable system is making sure it can sustain itself. There are complicated issues related to intellectual property that can diminish the new information being added to the pool, to be balanced against economic systems that, in my opinion, should also create the possibility of a livelihood for those who create and innovate – not just in science and technology, but in advancing humanity in other ways.

I’m not sure what the answers are. I’m not even sure what the right questions are. I’m fairly certain the present large language models don’t have them because we have not had good questions and answers yet to problems affecting us as a species.

I do know that’s a conversation we should be having.

What do you think?

47 Seconds.

Fair warning: my writing typically takes more than 47 seconds to read.

It’s science. Attention spans are shrinking:

…Unfortunately, all too many of us are having “squirrel” days, according to Dr. Gloria Mark, a professor of informatics at the University of California, Irvine, who studies how digital media impacts our lives. In her new book, “Attention Span: A Groundbreaking Way to Restore Balance, Happiness and Productivity,” Mark explained how decades of research has tracked the decline of the ability to focus.

“In 2004, we measured the average attention on a screen to be 2½ minutes,” Mark said. “Some years later, we found attention spans to be about 75 seconds. Now we find people can only pay attention to one screen for an average of 47 seconds.”…

I’m not that surprised. There was this notion somewhere in the past that being constantly connected to other people was a great idea, but it also came with constant interruption.

As a Gen X sort of person, I’ve observed it anecdotally and have rebelled against it in workplaces, where younger generations in technology chose various chat technologies over email. We’d finally gotten the deluge of email – where sending and receiving messages dragged on personal productivity – under control by checking it only a few times a day.

Then some idiots decided that immediate conversations all the time with chat was a good thing. Some people seemed to thrive on it, some people didn’t, but one thing I noted as one of the rebels was that my work always got done while the people constantly chatting with each other never met deadlines. It’s anecdotal, but… now it’s been shown that our attention spans are indeed shortening, and if you have to take 25 minutes or more to refocus on what you’re doing after each interruption, your productivity is unlikely to be as good as it could be.
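The arithmetic on that 25-minute figure is simple and sobering. A back-of-envelope sketch, where the number of interruptions per day is my own assumption for illustration:

```python
# Back-of-envelope cost of interruptions, using the ~25-minute refocus figure.
# The number of interruptions per day is an assumption for illustration.
REFOCUS_MINUTES = 25
interruptions_per_day = 10         # assumed: one chat ping every ~45 minutes
workday_minutes = 8 * 60

lost = interruptions_per_day * REFOCUS_MINUTES   # time spent just refocusing
fraction_lost = lost / workday_minutes

print(f"Minutes lost refocusing: {lost}")
print(f"Fraction of an 8-hour day: {fraction_lost:.0%}")
```

Even a modest assumed interruption rate eats roughly half the working day in refocusing alone.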

In our personal lives, social media has also moved from blog posts, which already pandered to shorter attention spans, to short videos on Facebook or TikTok or… whatever social media. Some think it’s great; many were born into it and don’t know the joy of not being interrupted. My grounding in the pre-Internet era, when we time-shared computers like real estate, gave me that.

Maybe that’s why I stare at all the red dots running around and wonder why people even of my generation don’t pause and focus more on what’s important. The trouble is, we all think different things are important. That’s why, when I wrote a brief note to Gen Z and Millennials, I pointed out that we, indeed, didn’t start the fire. Yes, we failed to put things out, and yes, we did have a hand in screwing things up. There are lessons in why we screwed these things up.

But that would take more than 47 seconds to explain.

You folks need to work on that.

Normalizing The Mob.

I was glancing around social media when I saw NPR’s “Hard times Are Ahead for news sites and social media. Is this the end of Web 2.0?”:

…”The news industry didn’t really have a profit model other than trying to get eyeballs and earn digital advertising revenue,” said Courtney Radsch, who studies technology and media at UCLA. “But what we saw is that the tech platforms, specifically Google and Facebook, ended up controlling that digital advertising infrastructure.”…

I suppose now that NPR has caught up to reality on this, it’s time to beat it with an old printing press. It’s been a problem for at least one decade, perhaps two, and it impacts anyone trying to create content on the Internet. The more agile companies and individuals have been good at monetizing trends in what the mob wants, creating a digital Vox Populi which can be outright… outrageous.

A few days ago I saw a reel of a lesbian manscaping a sheep because it was popular. To be fair, I didn’t realize that this was necessary or had a technique involved, but in less than a minute I was taught a masterclass in shaving a male sheep’s nether regions… which, honestly, I could have lived without, but it was so outrageous I simply watched and said, “What an example of outrageous content”. I’m sure it has a niche market, but I am not the niche. It just popped up in my feed.

It goes further than that though.

The regular ‘media’ has become just as outrageous, with Tucker Carlson’s nether regions being shaved by Fox News after he was outed as having opinions different from those he expressed. I imagine he really didn’t have his nether regions shaved – I do not want to know – but he did get fired, which for a talking head is pretty much the same. That’s where these little economic bubbles come in: Tucker Carlson likely made sure he made enough money for as long as he could, selling people what they wanted even if it had nothing to do with truth. It’s marketing. And wherever he lands, I’m sure he’ll have a fluffy landing, perhaps made from the wool of the nether regions of a male sheep.

He’s going to be ok. The people he fleeced with their attention to his misdirection will range from upset to simply waiting for the next Great Faux Hope. That’s the way of media these days. You don’t have to tell anything even resembling the truth, you simply have to cash in fast enough to live the rest of your life sipping mai-tais. It helps if you have some truth in it so it’s defensible, but that is no longer necessary. Media has become fiction, which should be irritating fiction writers everywhere.

The news used to be pretty boring in the last century, but it was necessary to understand the world. Now, it’s impossible to understand the world because the people who distribute facts without embellishing are not as interesting to people, with the exception of comedians who have become this age’s philosophers. Thank you, George Carlin, wherever you are.

What’s happened with ‘Web 2.0’ is what was bound to happen: the economy surrounding it is normalizing. The hype it used to have is being eaten voraciously by large language models trained on the hype-fest of Web 2.0, probably full of pithy marketing slogans whose psychology makes them powerful red dots, eating the most valuable resource an individual has: time.

Now that resource is being used by ChatGPT, where people are given the illusion of creating content that amuses them. That hype will eventually fall away too, since what the language models were trained on was content available on the web – content so full of psychological marketing that it’s similar to masturbation in giving us the results we want without involving another human.

The content of Web 2.0 was for the mob, marketing enough that a Cluetrain was created. But the products largely didn’t keep pace with the marketing, as usual. The new tech is just normalizing the old tech, which was normalizing what the mob wanted.

Same as it ever was. Do something different.

Media Tourists

Every morning I set aside my morning coffee to travel the world through my mobile phone as if it were a spaceship, hermetically sealed. I peer through the window as I travel from point to point of interest that morning. One moment I’m checking on my friends in Ukraine unobtrusively, keenly aware that there’s more spin there than in the media frenzy of a hurricane hitting the United States.

The next moment, I’ll visit friends and family throughout the world as I read their Facebook posts. Then I’ll look at what the talking heads of tech think is important enough to hype, then I’ll deep dive into a few of them to discreetly consider most of it nonsense. I am a weathered traveler of space and time, and the Internet my generation extended around the world has not been wasted on me. I am a tourist of tourists, as are we all. It’s what we humans do.

When people take those all inclusive vacations to resorts, they get to see a sanitized version of the country they are visiting that doesn’t reflect the reality of that country. You’ll hear people talking about when they visited this place or that, and how wonderful this or that was – you’ll rarely hear what was wrong with the country because… well, that would be bad for tourism, and tourism is about selling a dream to people who want to dream, much like politics, but with a more direct impact on revenue that politicians can waste.

Media, and by extension social media, are much the same. We see what’s ‘in the frame’ of the snapshot we are given, and that framing makes us draw conclusions about a place. A culture. A religion, or not. An event. A person.
Some of us believe that we’re seeing everything clearly, as in the image at the top of this post. You can look at any point in the picture and see detail, but that’s not how we really see it, and therefore, in our minds, it’s not the way it is. What we see is subject to the ‘red dots’ I wrote of – things looking for our attention, directed consciously by someone else (marketing/advertising) and subconsciously by our own biases.

The reality of our experiences is usually more like the image to the right. Our focus is drawn by red dots and biases, and in the periphery other things are there, poorly defined. This example is purely visual. And because we generally like what we see, there’s generally a positive emotion reinforcing our wanting to see it again.

This is not new, and it can be good and bad. These days an argument could be made that the red dots of society have run amok.

A group of really smart people with really good intentions created a system that connects human experiences across the planet significantly faster than before. Some of our ancestors could not send a message around the world within their lifetime, and here we are, discussing milliseconds to nanoseconds as if we would even notice a millisecond passing. Our framing was simpler before; we didn’t have nearly as significant a global communication network back then. Technologies that spread things faster range from the wheel to sailing to flight to the Internet, in broad strokes. As Douglas Adams would write, “Startling advances in twig technology…”

However we got here, here we are.

If one group has a blue focus, another purple, another yellow, we get overlaps in framing and the immediate effect has been for everyone to go off in their corners and discuss all that is blue, purple and yellow respectively.

An optimist might say that eventually, the groups will recognize the overlaps in the framing and maybe do a bit better at communicating, but it doesn’t seem like we’re very near that yet. A pessimist will say that it will never happen, and the realist will have to decide the way forward.

I’m of the opinion that it’s our duty as humans to work toward increasing the size of our frames as much as possible so that we have a better understanding of what’s going on within our frame. I don’t know that I’m right, and I don’t know that I’m wrong. If I cited history, the victories would be few that way – there’s always some domination that seems to happen. Personally, I don’t see any really dominant perspective, just a bunch of polarized frames throwing rocks at each other from a distance.

We’ll get so wrapped up in things that we forget sometimes that there’s room for more than one perspective, as difficult as it may be for people to understand. We’ll forget that our small knowledge of someone else’s frame does not define their frame, but defines our own. We forget that we’re just tourists of frames: we visit as long as we wish, but do not actually live in a different frame.

Sounds messy? You bet. And all of that mess is being used to train large language models. Could it homogenize things? Should it? I am fairly certain we’re not ready for that conversation, but like talking about puberty and sex with a teenager… we do seem a bit late on the subject.

I’m just a cybertourist visiting your prison, as you visit mine. Please don’t look under the carpet.

The Societal Mirror.

The article “Introducing the AI Mirror Test, which very smart people keep failing” hits some pretty good points on these large language models that we are lazily calling Artificial Intelligence. One of my favorites is this:

…What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient…

As I pointed out in a post on ChatGPT and large language models, ‘A Chat With GPT on AI‘, I recognized that it was meeting my cognitive bias. In that regard, I recognized some of myself in what I was getting back – not too different from when I was playing with Eliza in the 1980s, the only difference being that the bot has gotten better because it has access to more information than what the user types in. We were young, we dreamed, but tech wasn’t ready yet.
Of course it’s a mirror of ourselves in that regard – but the author didn’t take it to the next step. As individuals we should see ourselves in the output, but we should also understand that it’s global society’s mirror as well, with all the relative good and relative bad that comes with it. We have biases in content based on language, on culture, on religion, and on much more. I imagine the Amish don’t care, but they are still part of humanity and we have a blind spot there, I’m certain – never mind all the history that our society has erased, continues to erase, or has simply ignored.

Personally, I find it a great way to poll the known stores of humanity on what its biases believe, no matter how disturbing the results can be. And yet we’re already likely diluting our own thoughts reflected back at us, as marketers and bloggers (not mutually exclusive) churn out content from Large Language Models that those models will eventually train on. That’s not something I’m comfortable with, and as usual, my problem isn’t so much technology as society – a rare thing for me to say when so much technology is poorly designed. Am I ‘victim shaming’?

When the victim is the one abusing themself, can it be victim shaming?

Our own echo chambers are rather shameless.

Through A Blurry Looking Glass.

I’ve been spending, like so many, an inordinate amount of time considering the future of what we accuse of being artificial intelligence, particularly since I’ve been focusing on my writing and suddenly we have people getting things written for them by ChatGPT. I’ll add that the present quality doesn’t disturb me as much as the reliance on it.

Much of what these artificial intelligences pull from is on the Internet, and if you’ve spent much time on the Internet, you should be worried. It goes a bit beyond that if you think a bit ahead.

Imagine, if you would, artificial intelligences quoting artificial intelligences trained by artificial intelligences. It’s really not that far away, and it may have already begun as bloggers looking to capitalize on generating content quickly thrash their keyboards to provide prompts to ChatGPT and its ilk, creating blog posts that pop up in search engine results when marketed. Large language models (of which ChatGPT is one) then think this is great content, because what is repeated most makes predictive models say, “Aha! This must be what they mean or want!”
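That recursion can be sketched with a toy model. Here each ‘generation’ trains only on a sample of the previous generation’s output; the numbers and setup are purely illustrative:

```python
import random

# A toy illustration of models training on their own output: each 'generation'
# sees only a sample (with replacement) of what the previous generation emitted.
# Rare 'ideas' drop out of the sample and can never come back.
random.seed(42)

corpus = list(range(1000))        # 1000 distinct 'ideas' in the human corpus

generations = [corpus]
for _ in range(5):
    parent = generations[-1]
    child = [random.choice(parent) for _ in range(len(parent))]
    generations.append(child)

for i, g in enumerate(generations):
    print(f"generation {i}: {len(set(g))} distinct ideas survive")
```

Each resample can only lose distinct ideas, never regain them – which is the recursive worry in miniature.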


This cartoon pokes at the problem with “AI Written, AI Read”, but not the recursive issue. Fortunately, on a personal blog I can use it; otherwise it would take us down the path of copyright and the Stable Diffusion lawsuit.

This has the potential to be a scary future. People who look a few steps ahead have started an open letter – which of course I signed – asking to Pause Giant AI Experiments. It’s symbolic, I fully expect, because the people behind these artificial intelligences aren’t really interested in much more than their bottom line, with rare exceptions as far as I know.

Meanwhile, so many seem to think that these Large Language Models are a nice replacement for the nerds doing their homework, because they don’t have to hang out with them. There’s a reason I bring this up. One of the reasons artificial intelligences, Large Language Model or otherwise, are considered creative is that they are basically well-read people with mega-gobs of information at their digital synapses. They make it easier to find the intersectionality – a realm that has become more and more important and less and less thought about in a society where specializations require diplomas to do things that… one really doesn’t need a diploma to do.

And it doesn’t know everything. For example, I know a bit about Trinidad and Tobago, so I posed a question to ChatGPT related to the economy of Trinidad and Tobago (response in italics):

What are good exports that Trinidad and Tobago could focus on to help build the economy and better the socioeconomic status of its citizens?

Trinidad and Tobago has a diversified economy with a strong focus on the energy sector, particularly oil and gas production. However, there are several other sectors that the country could focus on to diversify its exports and boost its economy. Some potential export industries for Trinidad and Tobago include:

Tourism: Trinidad and Tobago has a rich cultural heritage, beautiful beaches, and unique natural attractions such as the Caroni Swamp and the Asa Wright Nature Centre. The country has the potential to further develop its tourism industry and attract visitors from around the world.

Agriculture: Trinidad and Tobago has a favorable climate for agriculture and could focus on exporting products such as cocoa, coffee, citrus fruits, and vegetables. The country could also explore the potential for organic farming and niche products such as spices and herbs.

Creative industries: Trinidad and Tobago has a vibrant cultural scene and could focus on exporting its music, film, and other creative products. The country has a rich tradition of Carnival, which could be further developed as a major tourist attraction and cultural export.

Information and communication technology (ICT): Trinidad and Tobago has a well-educated and tech-savvy workforce, and the country could focus on developing its ICT sector and exporting software, IT services, and other digital products.

Manufacturing: Trinidad and Tobago could focus on developing its manufacturing industry and exporting products such as chemicals, plastics, and processed foods.

Of course, developing any of these industries would require significant investment and effort, and there may be other factors to consider such as infrastructure, labor costs, and market demand. However, focusing on these industries could help diversify the economy and provide new opportunities for the citizens of Trinidad and Tobago.

Sounds very good, right? The devil is in the details.

Tourism is a definite possibility, but between the decades-long crime situation (which I and others believe stems from the socioeconomics of an undiversified economy), the flash flooding, and a focus on marketing rather than quality… no. I do like that it mentioned the Asa Wright Nature Centre, and if anyone actually does come down this way, I can happily point you to other places that you won’t find in the tourist brochures.

Agricultural land has been used by the government to build housing, so arable land is decreasing with every development the Housing Development Corporation creates, as well as with every agricultural plot converted to residential, commercial or industrial use, depending on who greases the wheels.

Manufacturing would be brilliant. Very little is made in Trinidad and Tobago, but if you’re going to be competing with robots and artificial intelligences in the developed world, we can throw that out.

ICT is my personal favorite, coming from a chatbot that has already got people generating code with it. Seriously, ChatGPT?

Everything ChatGPT has presented has been said more than once in the context of diversifying the economy of Trinidad and Tobago, and it’s a deep topic that most people understand only in a cursory way. The best way to judge an economy is to observe it over time. In the grand scale of global discourse, a dual-island nation with an estimated population of 1.5 million is not as interesting to the rest of the world as Trinbagonians would like to think – like any other nation, most people there think it’s the center of the universe. But it’s not a big market; for opportunities, young intelligent people leave as soon as they can (brain drain), and what we are left with aspires to mediocrity while hiring friends over competence. A bit harsh, but a fair estimation in my opinion.

How did ChatGPT come up with this? From the data it could access, and since Trinidad and Tobago is an infinitesimal slice of global interest, not much content is generated about it other than government press releases by politicians who want to be re-elected so that they can keep their positions – a situation endemic to any democracy that elects politicians, though in Trinidad and Tobago there are, for some reason, no maximum terms. A friend sailing through the Caribbean mentioned how hard it was to depart an island, and I responded, “Welcome to the Caribbean, where every European colonial bureaucracy has been perpetuated into stagnancy.”

Using Trinidad and Tobago as a test case – an outlier in the global information database we call the Internet – is revealing: the model carries a bias it doesn’t know about, because the data it feeds on is itself biased, and that is unlikely to change.

But It’s Not All Bad.
I love the idea that these large language models can help us find the intersectionality between specialties. Decades of my life have been spent doing just that. I read all sorts of things, and much of what I have done in my lifetime has been cross-referencing ideas from the different specialties I have read up on. I once solved a memory issue in a program on the Microsoft Windows operating system by pondering Costa Rican addresses over lunch. Intersectionality is where many things wander off to die these days.

Sir Isaac Newton pulled from such intersections. One biography describes him as a multilingual alchemist whose notes were written in multiple languages – probably, one must consider, a reflection of his internal dialogue. He didn’t really discover gravity – people knew things fell down well before him, I’m certain – but he was able to pull from various sources and come up with a theory he could publish, something he became famous for, and something he was infamous for within the politics of academia.

J.R.R. Tolkien, who recently had a great movie done on his life, was a linguist who pulled from many different cultures to put together fiction that has transcended his death. His book, “The Hobbit”, and the later trilogy of “The Lord of the Rings” have inspired entire genres of fantasy fiction, board games and much more.

These two examples show how pulling from multiple cultures, languages and specialties is historically significant. Large Language Models are much the same.

Yet there are practical things to consider. Copyrights. Patents. Whether they are legal entities or not. The implicit biases in what they are fed, with the old software engineering adage ‘GIGO’ (garbage in, garbage out) coming to mind, along with the potential for irrevocable recursion: supercharging that garbage and spewing it out to the silly humans who, as we have seen over the last decades, will believe anything. Our technology and the marketing of it are well beyond what most people can comprehend.

We are sleeping, and our dreams of electric sheep come with an invisible electric fence with the capacity to thin the herd significantly.


Why Social Media Moderation Fails

Ukrainian Military Tractor Pulling Moscow Parody
A clear parody of a Ukrainian tractor pulling the Moskva.

Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it’s quite possible that the Facebook algorithms that have run amok with me over similar things – clear parody – may further restrict my account. I clearly marked the image as a parody.

Let’s see what happens. I imagine they’ll just toss more restrictions on me, which is why Facebook and I aren’t as close as we once were. Anyone who thinks a tractor pulling the sunk Moskva really happened should probably have their head examined, but this is the issue of such algorithms left unchecked. It quite simply is impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, even having been blown up when their owners were simply trying to reap their harvests.

But this is not about that.

This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.

What The Hell Do You Know?

Honestly, not that much. As a user, I’ve steered clear of most problems with social networks simply by knowing it’s not a private place where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.

What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since I couldn’t work on it by myself because of the infrastructural constraints of Trinidad and Tobago (which still defy emergency communications), I started working on the other aspects of it, and the core issue was ‘trusted sources’.

Trusted Sources.

To understand this problem: you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But what if your mechanic doesn’t specialize in your car, while your friend has a BMW that spends more time in the shop than on the road? He might be a trusted source too.

You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.

In a disaster situation, you have the people on the ground that you have on the ground. You might be lucky enough to have doctors, nurses, EMTs and people with some experience in dealing with a disaster of whatever variety is on the table, and you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.
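For what it’s worth, a crude version of weighting such on-the-ground sources can be sketched in code. This is purely illustrative – the class, scores and step sizes below are my invention for this post, not anything from the actual Alert Retrieval Cache:

```python
class TrustLedger:
    """Toy trust model: each source starts at a neutral score; corroborated
    reports raise it, contradicted reports lower it. Names and weights are
    illustrative only."""
    def __init__(self, initial=0.5, step=0.1):
        self.scores = {}
        self.initial = initial
        self.step = step

    def score(self, source):
        return self.scores.get(source, self.initial)

    def corroborated(self, source):
        self.scores[source] = min(1.0, self.score(source) + self.step)

    def contradicted(self, source):
        self.scores[source] = max(0.0, self.score(source) - self.step)

    def rank_reports(self, reports):
        # reports: list of (source, text); most-trusted sources come first.
        return sorted(reports, key=lambda r: self.score(r[0]), reverse=True)

ledger = TrustLedger()
ledger.corroborated("emt_on_scene")    # a hypothetical EMT's report checked out
ledger.contradicted("anonymous_tip")   # a hypothetical anonymous report did not
reports = [("anonymous_tip", "bridge out"), ("emt_on_scene", "shelter full")]
print(ledger.rank_reports(reports)[0][0])  # emt_on_scene ranks first
```

Even this toy shows why the problem is human: something still has to decide what counts as corroboration, and that is the part machines are worst at.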

So the Alert Retrieval Cache I was working on, after its instantiation, was going to have to deal with these very human issues, and the best way to deal with that is with other humans. We’re kind of good at that, and it’s not something AI is very good at, because AI is built by specialists and, beyond job skills, most people are generalists. You don’t have to be a plumber to fix a toilet, and you don’t have to be a doctor to put a bandage on someone. What’s more, people can grow beyond their pasts, despite Human Resources’ infatuation with the past.

Nobody hires you to do what you did, they hire you to do what they want to do in the future.

So even in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters – open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot, in my opinion) – trusted sources are a complete crapshoot. This leads everyone to trust nothing, or some to trust everything.

Generally, if it goes with your cognitive bias, you run with it. We’re all guilty of it to some degree. The phrase, “Trust but verify” is important.

In social media networks, ‘fact checking’ became the greatest thing since giving up one’s citizenship before a public offering. So fact checking happens, and for the most part is good – but, when applied to parody, it fails. Why? Because algorithms don’t have a sense of humor. It’s either a fact, or it’s not. And so when I posted the pictures of Ukrainian tanks towing everything, Facebook had a hissy fit, restricted my account and apparently had a field day going through past things I posted that were also parody. It’s stupid, but that’s their platform and they don’t have to defend themselves to me.

Is it annoying? You bet. Particularly since no one knows how their algorithms work. I sincerely doubt that they do. But this is a part of how they moderate content.

In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting one’s self in the foot (as I may be doing now when this posts to Facebook), but if you’ve already lost your feet, how much does that matter?

Social media sites fail when they don’t explain their policies. But it gets worse.

Piling on Users.

One thing I’ve seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the ‘Pile On’, where a group of people can get onto a thread and overload someone’s ability to respond to one of their posts. On most networks there is some ‘slow down’ mechanism to avoid that happening, and I imagine Twitter is no different, but that might be just from one specific account. Get enough accounts doing the same thing to the same person, it can get overwhelming from the technical side, and if it’s coordinated – maybe everyone has the same sort of avatar as an example – well, that’s a problem because it’s basically a Distributed Denial of Service on another user.
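To make that concrete, here is a toy sketch of why per-account limits miss a pile-on: counting *distinct* accounts replying to one target inside a sliding time window, rather than throttling each account on its own. The thresholds and names are made up; no platform has published its actual mechanism:

```python
from collections import defaultdict, deque

class PileOnDetector:
    """Flags a target when too many distinct accounts reply within a short
    window. Per-account rate limits miss this, since each account can stay
    well under its own limit. Thresholds are invented for illustration."""
    def __init__(self, window_seconds=300, max_distinct_repliers=50):
        self.window = window_seconds
        self.limit = max_distinct_repliers
        self.replies = defaultdict(deque)  # target -> deque of (time, account)

    def record_reply(self, target, account, timestamp):
        q = self.replies[target]
        q.append((timestamp, account))
        # Drop replies that have aged out of the sliding window.
        while q and timestamp - q[0][0] > self.window:
            q.popleft()
        distinct = {acct for _, acct in q}
        return len(distinct) > self.limit  # True -> looks like a pile-on

detector = PileOnDetector(window_seconds=300, max_distinct_repliers=5)
flagged = False
for i in range(10):
    flagged = detector.record_reply("target_user", f"account{i}", i)
print(flagged)  # True: ten distinct accounts replied within ten seconds
```

Each account posted exactly once, so an account-level throttle sees nothing wrong; only the target-level view catches the coordination.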

Now, this could be about all manner of stuff, but the algorithms involved don’t care how passionate people might feel about a subject. They could easily see commonalities in the ‘attack’ on a user’s post, and even on the user. A group could easily be identified as doing pile-ons, and their complaints could be ‘demoted’ on the platform, essentially making them an eyeroll and, “Ahh. These people again.”

It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn’t, I would say it’s unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching for that. Are there? I have no earthly idea, but it’s something I could see easily implemented.

And for someone who takes part, if this happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.

Not All In The Group Are Saints.

If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality and there’s no apology to be had because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold that well.

Members of groups can break terms of service. Now, we assume that each account is looked at individually. What happens if they can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it’s always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.
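A loose grouping like that could be as simple as counting which accounts keep showing up together in flagged incidents. Again, a hypothetical sketch of the idea, not any platform’s actual method:

```python
from collections import Counter
from itertools import combinations

def find_recurring_groups(incidents, min_together=2):
    """Given the set of accounts seen in each flagged incident, return the
    pairs of accounts that co-occur at least `min_together` times -- a crude
    'known associates' grouping. Illustrative only."""
    pair_counts = Counter()
    for accounts in incidents:
        for pair in combinations(sorted(set(accounts)), 2):
            pair_counts[pair] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_together}

incidents = [
    {"a", "b", "c"},   # one pile-on
    {"a", "b", "d"},   # a and b show up together again
    {"e", "f"},
]
print(find_recurring_groups(incidents))  # {('a', 'b')}
```

Notice that the grouping never looks at what anyone said – which is exactly the worry: a passionate bystander who keeps showing up becomes a ‘known associate’ by arithmetic alone.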

Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.

We don’t know.

We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.

What should you not do on a network? Break the Terms of Service.

But when the terms of service are ambiguous, how much do they really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.

Yet, if somehow the group gets known to the platform, and enough members break Terms of Service, could they? Would they? Should they?

We don’t know. And people could be shooting themselves in the foot.

It’s Not Our Platform.

As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.

People like to talk about freedom of speech a lot, but that’s not really legitimate when you’re on someone else’s website. They can make it as close as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don’t have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flag ship needed to be fact checked.

In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.

So when dealing with these social networks, bear that in mind.