The Best Way To Avoid Spreading Misinformation

It’s likely that at some point we’ve all spread misinformation without meaning to. It can have dire consequences, too. The Washington Post has an article on misinformation, but I think it missed the most important thing.

Waiting.

‘Trusted sources’ has been a problem that I’ve been studying since we were working on the Alert Retrieval Cache. In an actual emergency, knowing which information you can trust from the ground and elsewhere is paramount. I remember Andy Carvin asking me how Twitter could be used for the same and I shook my head, explaining the problem that no one seemed to want to listen to: The problem is that an open network presents problems with flawed information getting accepted as truth.

Credentialism is part of the problem. We expect experts to be all-knowing when, in fact, being an expert has no certification. It requires having been right before, while we want answers right now, and unfortunately the truth doesn’t work that way.

We see a story on social media and we share it, sometimes without thinking, which is why bad news travels faster than good news.1

The easiest way to avoid spreading misinformation is to do something we’re not very good at in a society that pulses like a tachycardic heart: we wait and see what happens. We pause. If we must pass something along to our social networks, we say we’re not sure it’s real. And since headlines are usually algorithm-generated to catch eyes and spread like Covid-19, we have to read the stories and check the facts before we share, rather than sharing off the cuff.

Somewhere along the line, the right now trumped being right, and we see it everywhere. By simply following a story before sharing it, you can stop the virus of misinformation in its tracks. Let the story develop. See where it goes. Don’t jump in immediately to write about something you don’t actually know much about.

Check news sources for the stories. Wait for confirmation. If it’s important enough to post, point out that it’s unconfirmed.

It’s that simple.

  1. There’s a pun or two in there. ↩︎

Writing: Software

As I’ve mentioned a few times over at RealityFragments, I’ve been working on a book and I’ve been using LibreOffice for it. Why should I pay for a subscription to software to do something so basic? Why should I pay for something when I don’t have to? These are questions that many people who have been indoctrinated into the Microsoft brand don’t seem to ask as often as they should.

However, I seem to have outgrown LibreOffice for the book, so I did a lot of research on alternatives. There’s a lot to keep track of even with a simple blog post here on KnowProSE.com, much less a science fiction/fantasy novel. I tried a few and settled on Scrivener because, where I have outgrown LibreOffice in this regard, I feel like I can grow into Scrivener.

I bought it last night, and I’m already finding it a step up from what I’ve been doing. I have been writing long enough now to know what I need in order to write. Before, this was not the case, and LibreOffice worked well not just for writing as I was, but for letting me find out what I personally needed. The price of LibreOffice is also impossible to beat, and I still have it as my office suite.

Yet there was a cool thing that I didn’t realize I needed. It cost me about $20 more, but I spent hours last night organizing some research using Scapple. I’d best describe it as a cross between a mind map and a one-user wiki. It’s actually one of the things I’ve been missing, having tried standalone wikis to organize things.

Scapple isn’t necessarily just for writing, either. I could see it being used for a variety of things. That you can link documents, images included, within a ‘Scapple’ board is awesome for me.

If you are looking for a step up in writing tools, I’d suggest heading over to LiteratureAndLatte and giving Scrivener and Scapple a shot with their free trials. They may not be for everyone, but having spent hours with them last night, they’re worth a look.

If I’m up to it, I might do a deep dive review. I think that won’t happen because writing, as I know it, is a personal process and everyone is different. The tools we use have the commonalities we agree upon, much like language itself – we build our own little worlds.

NYT Says No To Bots.

The content used for training large language models and other AIs is something I have written about before, including being able to opt out of being crawled by AI bots. The New York Times has updated its Terms and Conditions to disallow that – which I’ll get back to in a moment.

It’s an imperfect solution for so many reasons, and as I wrote before when writing about opting out of AI bots, it seems backwards.

In my opinion, they should allow people to opt in rather than this nonsense of having to go through motions to protect one’s content from being used as a part of a training model.
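For context on those ‘motions’: the opt-out mechanics today mostly ride on a site’s robots.txt file. A minimal sketch, using the publicly documented crawler tokens (GPTBot for OpenAI, CCBot for Common Crawl) – and noting that honoring robots.txt is entirely voluntary on the crawler’s part:

```
# robots.txt at the site root
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Which is exactly the problem: the burden, and the trust, both sit with the content owner.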

Back to the New York Times.

…The New York Times updated its terms of services Aug. 3 to forbid the scraping of its content to train a machine learning or AI system.

The content includes but is not limited to text, photographs, images, illustrations, designs, audio clips, video clips, “look and feel” and metadata, including the party credited as the provider of such content.

The updated TOS also prohibits website crawlers, which let pages get indexed for search results, from using content to train LLMs or AI systems…

“The New York Times Updates Terms of Service to Prevent AI Scraping Its Content”, Trishla Ostwal, Adweek.com, August 10th, 2023.

This article was then referenced by The Verge, which added a little more value.

…The move could be in response to a recent update to Google’s privacy policy that discloses the search giant may collect public data from the web to train its various AI services, such as Bard or Cloud AI. Many large language models powering popular AI services like OpenAI’s ChatGPT are trained on vast datasets that could contain copyrighted or otherwise protected materials scraped from the web without the original creator’s permission…

“The New York Times prohibits using its content to train AI models”, Jess Weatherbed, TheVerge.com, August 14th, 2023.

That’s pretty interesting considering that Google and the New York Times updated their agreement on News and Innovation on February 6th, 2023.

This all falls into a greater context where many media organizations called for rules protecting copyright in data used to train generative AI models in a letter you can see here.

Where does that leave us little folk? Strategically, bloggers have been a thorn in the side of the media for a few decades, driving down costs for sometimes pretty good content. Blogging is the grey area of the media, and no one really seems to want to tackle that.

I should ask WordPress.com what their stance is. People on Medium and Substack should also ask for a stance on that.

Speaking for myself – if you want to use my content for your training model so that you can charge money for a service, hit me in the wallet – or hit the road.

Understanding Followers In An Age of Social Media.

In the beginning, there were blogs, and they were good. Mostly.

Then came social media, which allowed everyone a blog, per se, and a common area to view everyone else’s ‘blogs’. In fact, most social media websites are just that – microblogging with better readership by the platform, through the platform, and for the platform.

Blogs had followers, or readers – as we have here on WordPress.com, which seems to have a confused identity in trying to be a site like Facebook or Twitter while not being Facebook or Twitter. Can I say Twitter still? Will this make Elon Musk angry? More importantly, do I care? Yes, dunno, no.

In travels, I have met people from all over the world. We shared parts of our lives, and these were generally good in some way. We’ve all moved on, as we should have, from the times when we were connected. This is the point I wanted to riff on with what Renard was blogging about.

More and more often I find myself telling others, even writing it, that although people may be going in the same direction, they may have a different destination. In the age of social media, followers will come and go. If you’re writing focally, people may grow beyond the interest. If you write personally, something you may be going through may riff with what’s going on in someone else’s life.

We all move on, or should. It’s healthy.

When people like every post I make, it’s nice for a while but then it gets a little worrisome. Maybe it tells me they’re not actually reading but just going through an engagement checklist someone wrote in “Blogging for Dummies”. It’s highly unlikely someone would like all those posts and not have anything to write as a comment.

Just remember, people do move on. It’s when they’re reading what you wrote that matters. They don’t have to come over for dinner, and they don’t have to have a blood oath. If they got what they enjoyed, wanted or needed (hopefully all 3), you’ve done something of worth.

NatGeo Lays Off More Writers. :(

In probably the saddest news for me this week, National Geographic has laid off its last remaining staff writers.

It was a matter of time, I suppose, with the Internet shaking things up for better and worse, and with National Geographic being a business – but how many of us have really considered National Geographic a business? In many ways, it is akin to the BBC in showing us our world, so much so that I hope that their lawyers don’t get upset over me using one of their most famous covers as part of this post.

I’ll take it down if you want to, NatGeo Legal Department, but I’d rather keep it alive as a memory of a wonderful magazine that enriched so many of our lives before the Internet.

It may seem odd that I would worry about such a thing from NatGeo, but since they are majority-owned right now by Disney – Defender of Mice, Cheese Rights, and Copyrights – I just want to be on an even keel.

The cutback — the latest in a series under owner Walt Disney Co. — involves some 19 editorial staffers in all, who were notified in April that these terminations were coming. Article assignments will henceforth be contracted out to freelancers or pieced together by editors. The cuts also eliminated the magazine’s small audio department.

The layoffs were the second over the past nine months, and the fourth since a series of ownership changes began in 2015. In September, Disney removed six top editors in an extraordinary reorganization of the magazine’s editorial operations.

Departing staffers said Wednesday the magazine has curtailed photo contracts that enabled photographers to spend months in the field producing the publication’s iconic images.

In a further cost-cutting move, copies of the famous bright-yellow-bordered print publication will no longer be sold on newsstands in the United States starting next year, the company said in an internal announcement last month.

“National Geographic lays off its last remaining staff writers”, Paul Farhi, Washington Post, June 28, 2023.

It’s interesting that WaPo didn’t paywall that article, which they’ve been pretty annoyingly good at. Bezos needs to get to space, we know.

But wait, there’s more.

“Staffing changes will not change our ability to do this work, but rather give us more flexibility to tell different stories and meet our audiences where they are across our many platforms,” the spokesperson said. “Any insinuation that the recent changes will negatively impact the magazine, or the quality of our storytelling, is simply incorrect.”

The full-time staff will be replaced by a roster of freelance writers, save for certain digital content that will be written by in-house editors, the former staffer said. National Geographic currently employs only two designated text editors, a group of so-called multi-platform editors who handle both print and digital, and a group of digital-only editors, the former staffer said.

“National Geographic magazine has laid off the last of its staff writers”, Liam Reilly, CNN, June 29th, 2023.

It’s worth noting that the Washington Post had only one paragraph on what CNN expanded on – and that may be appropriate because it seems to be the Company Line.

It wasn’t until I got to Quartz that I started to see things a little differently.

Disney CEO Bob Iger announced a $5.5 billion plan to cut costs across the company in February. The entertainment goliath has since fired 7,000 employees in multiple rounds of layoffs. One of Iger’s priorities is to turn around struggling streaming service Disney+.

“Instead of chasing (subscribers) with aggressive marketing and aggressive spend on content, we have to start chasing profitability,” said Iger at a Disney all-hands meeting in November, as Reuters reported.

NatGeo, which Disney bought from 21st Century Fox in 2019, has been just one brand hit hard by Iger’s cost savings plan. In September, six senior editors at the publication were also terminated.

“National Geographic will soon disappear from newsstand shelves”, Julia Malleck, Quartz.com, June 30th, 2023.

I mean, c’mon.

This is where the term ratf*cked came from, maybe. Is that a term? Maybe it should be.

Rolling The Text Dice.

I haven’t published any science fiction other than some things here and there on my blogs, so I wouldn’t say that I’m an expert on writing Science Fiction – but I most certainly would consider myself an expert on reading Science Fiction. And it’s Sunday, and this is what I feel like writing about today.

Presently, there’s an inflection point in technology where machines can write like a science fiction author (poorly), however banal one might think the output is. There’s a recurring theme right now of people worried about creative endeavors being taken over, but as far as I know nothing very impressive has come out in this regard, and I don’t expect it anytime soon.

…There’s one barrier that AI can’t cross, as of now. And that’s creating new science fictional concepts. The writer who can take in everything that’s going on now and speculate about the near future in any coherent way has the edge – for the moment. AIs based on current training models are essentially limited to rearranging the deck chairs on the Carnival Cruise’s Mardi Gras. What’s disappointing, is that’s exactly what most current human science fiction writers are doing too…

“The Future of Science Fiction”, James Wallace Harris, 4/2/23

I fully endorse what he writes there. Most human science fiction writers these days seem to work from Ye Olde CookeBooke of Formulaic Books, which has been greatly disappointing when I explored bookstores: running my fingers gently across a creative cover, flipping it open, glancing within, and seeing… the same formulae.

I’m not saying there isn’t good original science fiction out there. It’s just gotten so hard to find for me. The dwindling number of bookstores no longer seem to hold that many new ideas and thoughts, and Amazon, while convenient, lacks the tactile experience and probably has adversely affected any romantic relationships that blossomed over book choices in line. Don’t worry, I’m sure they’ll eventually create a dating app based on books you’ve purchased through Amazon, regardless of whether you read or understood them.

“We read, frequently if not unknowingly, in search of a mind more original than our own.”

Harold Bloom, How To Read and Why, 2000.

Harold Bloom’s quote has haunted me for at least a decade now, if not longer, because every time I go into a bookstore and see the same depressing stuff on the shelf that seems like corporate pulp fiction, I think that maybe it’s time I write something better. Something not formulaic.

Something that hits like Robert Heinlein’s Stranger in a Strange Land, an oddly poetic work considering how conservative the author was to create such an interestingly liberal book.

I’m starting a new project tomorrow, holding myself to landmarks and deadlines, so I’ve been roving around the Internet and reading up on things like these. The reason I’m writing about it here is that, in its own way, it’s a system, as hinted at here:

…Right now we have more science fiction of all kinds being produced than ever before. That production is going into overdrive with AI. And the percentage of it that’s readable and entertaining is increasing. That’s also part of the problem – there’s too much science fiction – at least for any one work to become widely known. Good science fiction novels have almost become a generic product line, in white boxes with black letter labeling…

“The Future of Science Fiction”, James Wallace Harris, 4/2/23

Anything sustainable in the world of our creation has to be fiscally sustainable. This is what happened with Web 2.0, this is what has happened with publishing in general, and so it has happened to just about everything else.


Saying that there is a lack of originality in Science Fiction is very much like saying that there’s no real originality in modern music. It’s subjective.

As originality decreases, banality increases.

Now here’s the rub. Since Bayesian probability, the core of just about every algorithm out there offering you suggestions on everything from Netflix to Facebook, targets the median, the median decides what is original or not.

As the average person experiences more original works, they increasingly see suggestions as banal. Bayesian probability only works based on what has happened.

Beating that system for marketing requires a mix of what people have liked with what is original, all of which are moving targets on a very small asteroid to be hit from a spaceship with really bad targeting.
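To make that concrete, here’s a toy sketch (mine, not any actual platform’s algorithm) of why a popularity-prior recommender keeps serving the median: anything without an engagement history scores zero and never surfaces.

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Rank catalog items by a crude popularity prior built from past
    engagement -- an item's frequency in the history stands in for
    P(liked | history). Novel items have no history, score zero,
    and never get recommended."""
    prior = Counter(history)
    ranked = sorted(catalog, key=lambda item: prior[item], reverse=True)
    return ranked[:k]

past_likes = ["space opera", "space opera", "cozy mystery"]
catalog = ["space opera", "cozy mystery", "experimental novella"]
print(recommend(past_likes, catalog))  # the novel work never makes the cut
```

Real systems are vastly more sophisticated, but the underlying tilt is the same: the model can only re-weigh what has already happened.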

Roll the dice, I suppose. I’d rather we had an infinite improbability machine.

AI: Standing on the Shoulders of Technology, Seeking Humanity

“When the mob governs, man is ruled by ignorance; when the church governs, he is ruled by superstition; and when the state governs, he is ruled by fear. Before men can live together in harmony and understanding, ignorance must be transmuted into wisdom, superstition into an illumined faith, and fear into love.”

Manly P. Hall, The Secret Teachings of All Ages.

It’s almost impossible to keep up with all that is going on related to discussion on what’s being marketed as artificial intelligence, particularly with a lot of speculation on how it will impact our lives.

Since the late 1970s, we evolved technology from computers to personal computers to things we carry around that we still call ‘phones’ although their main purposes do not seem to revolve around voice contact. In that time, we’ve gone from having technology on a pedestal that few could reach to a pedestal most of humanity can reach.

It has been a strange journey so far. If we measure our progress by technology, we’ve been successful. That’s a lot like measuring your left foot with your right foot, though, assuming you are equally equipped. If we measure success fiscally and look at the economics of the world, a few people have gotten fairly rich at the expense of a lot of people. If we measure it in knowledge access, more people have access to knowledge than any other time on the planet – but it comes with a severe downside of a lot of misinformation out there.

We don’t really have a good measure of the impact of technology in our lives because we don’t seem to think that’s important outside of technology, yet we have had this odd tendency in my lifetime to measure progress with technology. At the end of my last session with my psychologist, she was talking about trying to go paperless in her office. She is not alone.

It’s 2023. The paperless office was one of the technological promises made in the late 1980s. That was over three decades ago. In that same period, it seems that the mob has increasingly governed, superstition has governed the mob, and the states have increasingly tried to govern. It seems as a whole, despite advances in science and technology, we, the mob, have become more ignorant, more superstitious and more fearful. What’s worse, our attention spans seem to have dropped to 47 seconds. Based on that, many people have already stopped reading because of ‘TLDR’.

Into all of this, we now have artificial intelligence to contend with:

…Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI, many other researchers are worried too…

HT Tech, “AI Experts Aren’t Always Right About AI”

Counter to all of this, we have a human population that is clearly better at multiplying than at math. Most people around the world are caught up in their day-to-day lives, working toward some form of success even as we are inundated with marketing and biased opinions parading as news, all through the same channels that now connect us to the world.

In fact, it’s the price we pay, it’s the best price Web 2.0 could negotiate, and if we are honest we will acknowledge that at best it is less than perfect. The price we pay for it is deeper than the cost we originally thought and may even think now. We’re still paying it and we’re not quite sure what we bought.

“We are stuck with technology when what we really want is just stuff that works.”

Douglas Adams, The Salmon of Doubt.

In the late 1980s, boosts in productivity were sold to the public as ‘having more time for the things you love’ and variations on that theme, but that isn’t really what happened. Boosts in productivity came with a shift in corporate focus, so that the more you did, the more you had to do. In my experience, everyone hired for 40-hour work weeks but demanded closer to 50. Sometimes more.

Technology marketing hasn’t been that great at keeping promises. I write that as someone who survived as a software engineer with various companies over the decades. Like so many things in life, the minutiae multiplied.

“…Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year…”

Naomi Klein, “AI Machines Aren’t ‘Hallucinating’. But Their Makers Are”

There was a time when a software engineer had to go from collecting requirements to analysis to design to coding to testing to quality assurance to implementation. Now these are all done by teams. They may well all be done by versions of artificial intelligence in the future, but anyone who has dealt with clients first-hand will tell you that clients are not that great at giving requirements, and that has been rolled into development processes in various ways.

Then there is the media aspect, where we are all media tourists picking our social media adventures, creating our own narratives from what social media algorithms pick for us. In a lot of ways, we have an illusion of choice when what we really get are things that algorithms decide we want to see. That silent bias also includes content paywalled into oblivion, never mind all the linguistic bias, where we’re still discovering new biases.

Large Language Models like ChatGPT, called artificial intelligences with a degree of accuracy, have access to information that may or may not be the same that we may have in our virtual caves. They ‘learn’ faster, communicate faster and perhaps more effectively, but they lack one thing that would make them fail a real Turing test: Being human.

This is not to say that they cannot fake it convincingly by using Bayesian probability to stew our biases into something we want to read. We shouldn’t be too surprised, we put stuff in, we get stuff out, and the stuff we get out will look amazingly like stuff we put in. It is a step above a refrigerator in that we put in ingredients and we get cooked meals out, but just because a meal tastes good doesn’t mean it’s nutritious.

“We’re always searching, but now we have the illusion we’re finding it.”

Dylan Moran, “Dylan Moran on sobriety, his childhood, and the internet | The Weekly | ABC TV + iview”

These stabs at humanity with technology are becoming increasingly impressive. Yet they are stabs, and potentially all that goes with stabs. The world limited to artificial intelligences can only make progress within the parameters and information that we give to them. They are limited, and they are as limited as we are, globally, biases and all. No real innovation happens beyond those parameters and information. It does not create new knowledge, it simply dresses up old knowledge in palatable ways very quickly, but what is palatable now may not be so next year. Or next month.

If we had been dependent on artificial intelligences in the last century, we might not have had many of the discoveries we made. The key word, of course, is dependent. On the other hand, if we understood its limitations and incentivized humanity to add to this borgish collective of information, we might have made technological and scientific progress faster, but… would we have been able to keep up with it economically? Personally?

We’re there now. We’re busy listening to a lot of billionaires talk about artificial intelligences as if billionaires are vested in humanity. They’re not. We all know they’re not, though some of us pretend they are. Their world view is very different. This does not mean that it’s wrong, but if we’re going to codify an artificial intelligence with opinions somehow, it seems we need more than billionaires and ‘experts’ in such conversations. I don’t know what the solution is, but I’m in good company.

The present systems we have are biased. It’s the nature of any system, and the first role of a sustainable system is making sure it can sustain itself. There are complicated issues related to intellectual property that can diminish new information being added to the pool balanced with economic systems that, in my opinion, should also be creating the possibility of a livelihood for those who do create and innovate not just in science and technology, but advance humanity in other ways.

I’m not sure what the answers are. I’m not even sure what the right questions are. I’m fairly certain the present large language models don’t have them because we have not had good questions and answers yet to problems affecting us as a species.

I do know that’s a conversation we should be having.

What do you think?

The Complicated Publishing Issue.

Most of the people around me are completely unaware that the Internet Archive was successfully sued by some publishers over sharing electronic books:

…In July 2020, immediately after the Covid lockdown, four publishers – Hachette, HarperCollins, Wiley and Penguin Random House – decided to bring a major lawsuit against the Internet Archive, claiming it had ‘infringed their copyright’, potentially cost their companies millions of dollars and was a threat to their businesses. Last month the New York court found – predictably – in the publishers’ favour, rejecting the IA’s defence of ‘fair use’, and ruling that ‘although IA has the right to lend print books it lawfully acquired, it does not have the right to scan those books and lend the digital copies en masse.’…

The article goes on quite a bit, exploring it in what seems to me a fairly comprehensive and balanced way. Even so, I looked around through the news about it and found a few other things.

There’s the Authors Guild’s celebration of the outcome. That seems a bit more damning because the authors aren’t the publishers, and they raise some valid points.

The Internet Archive’s own post on the matter brought up the public good:

Today’s lower court decision in Hachette v. Internet Archive is a blow to all libraries and the communities we serve. This decision impacts libraries across the US who rely on controlled digital lending to connect their patrons with books online. It hurts authors by saying that unfair licensing models are the only way their books can be read online. And it holds back access to information in the digital age, harming all readers, everywhere…

Having read all of this, I find that there are good points on both sides. As far as the legalities of the specifics of the case, I am not a lawyer and do not pretend to be one on the Internet, so I can’t comment on that. I can say that, as someone who reads a lot, even though I have gone back to paper books for the most part, these publishing models seem antiquated and have not allowed much room for people’s right to access information, be it a romance novel or a scientific paper. The big wheels have turned too slowly on this.

I think the best article I read on the topic, the lawsuit regarding fair use, was by Marketplace:

…“The publishers believe that digital lending should essentially be a right that they license to libraries and that every time a library wants to loan something to a reader, the publishers should get paid a licensing fee,” Sinnreich told Marketplace.  

But licensing models can be burdensome for institutions that are largely underfunded. 

Public libraries use different licensing models, but the most common is the two-year license, explained Alan Inouye, leader of the American Library Association’s public policy and advocacy office…

…Librarians have chronicled journal price changes over the years, finding that some titles could cost between about $50 and $220 in the 1980s. Now, those same titles range between about $18,900 and $40,300. 

Inouye said he thinks both libraries and individuals have fewer rights in our digital environment…

There was a time, which present generations may not remember, when we lent friends the books we had. Given there was one physical copy, we could only share it once, and the same was true of libraries. If a book you wanted to read was checked out, you couldn’t get to it until it was physically returned. If a library had paid for more than one copy, it could lend more than one copy because of the physical limitations.

Now, with electronic books, it’s possible to share things a lot easier, but the intent of publishers is not for the books to be shared. The intent of public libraries is to share information for the public good. The intent of readers varies, but in the broad strokes it’s access to information, sometimes permanently (buying the book) and sometimes temporarily (borrowing the book from a library). The balance of all of this is at issue and has been for some time, and let’s be honest: Publishers have been making their own rules and lobbying their own legislation for some time. You can read about this in Lawrence Lessig’s “Free Culture”, which you can legally download as a PDF from the Library of Congress.

All of this is a centuries-long negotiation between people and those that publish. Oddly, it has little to do with the content creators themselves other than the fact that they are beholden to publishers to publish their works… in an era when self-publishing is possible. In return, they get help producing, marketing and protecting those works.

And now, things are actually becoming more complicated with large language models.

Silent Bias

Once upon a time, as a Navy Corpsman in the former Naval Hospital in Orlando, we lost a patient for a period – we simply couldn’t find her. There was a search of the entire hospital. We eventually did find her, but it wasn’t by brute force. It was by recognizing what she had come in for and guessing that she was on LSD. She was in a ladies’ room, staring into the mirror, studying herself through a sensory filter that she found mesmerizing. What she saw is something only she knows, but it’s safe to say it was a version of herself, distorted in a way only she could explain.

I bring this up because as a species, many of us connected to our artificial nervous system are fiddling with ChatGPT, and what we are seeing are versions of our society in a mirror.

As readers, what we get out of it depends a lot on what we bring to it. As we query it, we also get out of it what we ask of it, through the filters of how it was trained and its algorithms – the reflexes we give it. Is it sentient? Of course not; these are just large language models, not artificial general intelligences.

With social media companies, we have seen the effect of echo chambers as groups become more and more isolated despite being more and more connected, aggregated into audiences that are easier to sell advertising to. This is not to demonize them; many bloggers were doing it before them, and before bloggers there was the media, and before that as well. It might be amusing if we found out that cave paintings were actually advertising for someone’s spears or some hunting consulting service, or it might be depressing.

All of this cycled through my mind yesterday as I began considering the role of language itself, with its inherent bias, based on an article that stretched the point to large language models and artificial intelligence. The actual study was only about English and showed a bias toward addition, but with ChatGPT and other large language models being the current advertising tropism, it’s easy to understand the intention of linking the two in an article.

Regardless of intention, there is a point as we stare into the societal mirror of large language models. The training data will vary, languages and cultures vary, and it’s not hard to imagine that every language, and every dialect, has some form of bias. It might be a good guess that where you see a lot of bureaucracy, there is linguistic bias and that can get into a chicken and egg conversation: Did the bias exist before the language, or did the language create the bias? Regardless, it can reinforce it.
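The study’s “bias toward addition” can be made concrete with a crude toy probe – counting additive versus subtractive framing words in a text sample. The word lists and sample text below are my own illustrative assumptions, not the study’s methodology:

```python
# Toy probe for additive vs. subtractive framing in English text.
# The word lists and the sample text are illustrative assumptions.
additive = {"add", "more", "extra", "increase", "improve", "build"}
subtractive = {"remove", "less", "fewer", "decrease", "cut", "simplify"}

text = """to improve the process we should add more checks,
increase coverage, and build extra tooling; we could also cut
a step or remove a redundant review"""

# Normalize: lowercase, split on whitespace, strip trailing punctuation.
words = [w.strip(",;.") for w in text.lower().split()]
adds = sum(w in additive for w in words)
subs = sum(w in subtractive for w in words)
print(adds, subs)  # the sample skews additive: 6 vs. 2
```

A real corpus study would be far more careful, but even this sketch shows how a language can lean one way before any model is trained on it.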

Then I came across a humorous meme about a fake hero dog. It ends up being a legitimate thing that happened: the dog was rewarded with a steak for saving the life of a child from drowning and quickly came to the conclusion that pulling children out of water got it steak.

Apparently not enough children were falling into water for it to get steaks, so it helped things along. It happened in 1908, while Dr. Pavlov was still alive. His famous work with dogs was published in 1897, about 11 years prior, but given how slowly news traveled then, it wasn’t the common knowledge that we who have internet access would expect. It’s possible the New York Times article mentioned him, but I didn’t feel like unlocking their paywall.

If we take this back to society, we have seen the tyranny of fake news propagation. That’s nothing new either. What is interesting is the paywall aspect, where credible news is hidden behind paywalls, leaving the majority of the planet to read what is available for free. This is a product of publishing’s adaptation to the Internet age, which I lived through and which, to an extent, I gained some insight into when I worked for Linux Journal’s parent company, SSC. The path from print to internet remains a very questionable area because of how advertising differs between the two media.

Are large language models being trained on paywalled information as well? Do they have access to academic papers that are paywalled? What do they have access to?

What parts of ourselves are we seeing through these mirrors? Then we have to ask whether the large language models have access to things that most humans don’t; based on who is involved, it’s not hard to conclude that some of the data these companies feed them isn’t available for consumption by the average person. Whether that is true or not is up for debate.

All of this is important to consider as we deal with these large language models, yet the average person plays with them as a novelty, unaware of the biases. How much should we trust what comes out of them?

As far as disruptive technologies go, this is probably the largest we have seen since the Internet itself. As long as it gives people what they want, and it supports cognitive biases, it’s less likely to be questioned. Completely false articles propagate on the Internet still, there are groups of people who seriously believe that the Earth is flat, and we have people asking ChatGPT things that they believe are important. I even saw someone in a Facebook reel quoting a GPT-4 answer.

We should at the least be concerned, but overall we aren’t. We’re too busy dealing with other things, chasing red dots.

Through A Blurry Looking Glass.

I’ve been spending, like so many, an inordinate amount of time considering the future of what we accuse of being artificial intelligence, particularly since I’ve been focusing on my writing and suddenly we have people getting things written for them by ChatGPT. I’ll add that the present quality doesn’t disturb me as much as the reliance on it.

Much of what these artificial intelligences pull from is on the Internet, and if you’ve spent much time on the Internet, you should be worried. It goes a bit beyond that if you think a bit ahead.

Imagine, if you would, artificial intelligences quoting artificial intelligences trained by artificial intelligences. It’s really not that far away, and it may have already begun as bloggers looking to capitalize on generating content quickly thrash their keyboards, feeding prompts to ChatGPT and its ilk to create blog posts so that their content pops up in search engine results. Large language models (of which ChatGPT is one) then treat this as great content, because what is repeated most makes predictive models say, “Aha! This must be what they mean or want!”
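The recursive loop can be sketched with a toy – not a real LLM, just a unigram word-frequency “model” retrained on its own samples. Each generation, words that happen not to be sampled vanish forever, so the output narrows; everything here (the word counts, sample sizes, seed) is an illustrative assumption:

```python
import random
from collections import Counter

def train(corpus):
    """'Train' a toy unigram model: just the word-frequency distribution."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(model, n, rng):
    """Sample n words from the model's own distribution."""
    words = list(model)
    weights = [model[w] for w in words]
    return rng.choices(words, weights=weights, k=n)

rng = random.Random(42)
# Generation 0: a "human" corpus of 100 distinct words, evenly represented.
corpus = [f"word{i}" for i in range(100)] * 10

for _ in range(5):
    model = train(corpus)
    corpus = generate(model, 100, rng)  # retrain on the model's own output

# The vocabulary can only shrink: rarely-sampled words disappear forever.
print(len(set(corpus)))
```

It’s a cartoonishly simple dynamic compared to a real model, but the one-way ratchet is the point: a model that learns from its own output never rediscovers what the sampling dropped.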

This cartoon at Marketoonist.com pokes at the problem of “AI Written, AI Read”, but not the recursive issue. Fortunately, on a personal blog I can use it; otherwise it would take us down the path of copyright and the Stable Diffusion lawsuit.

This has the potential to be a scary future. People who look forward a few steps have started an open letter, which of course I signed, asking to Pause Giant AI Experiments. It’s a symbolic thing, I fully expect, because the people who are behind these artificial intelligences aren’t really too interested in much more than their bottom line, with the notable exception of DeepAI.org as far as I know.

Meanwhile, so many seem to think that these Large Language Models are a nice replacement for the nerds doing their homework, because they don’t have to hang out with them. There’s a reason I bring this up. One of the reasons artificial intelligences, Large Language Model or otherwise, are considered creative is that they are basically well-read people with mega-gobs of information at their digital synapses. They make it easier to find the intersectionality, a realm that has become more and more important and less and less thought about in a society where specializations require diplomas to do things that… one really doesn’t need a diploma to do.

And it doesn’t know everything. For example, I know a bit about Trinidad and Tobago, so I posed a question to ChatGPT related to the economy of Trinidad and Tobago (response in italics):

What are good exports that Trinidad and Tobago could focus on to help build the economy and better the socioeconomic status of its citizens?

Trinidad and Tobago has a diversified economy with a strong focus on the energy sector, particularly oil and gas production. However, there are several other sectors that the country could focus on to diversify its exports and boost its economy. Some potential export industries for Trinidad and Tobago include:

Tourism: Trinidad and Tobago has a rich cultural heritage, beautiful beaches, and unique natural attractions such as the Caroni Swamp and the Asa Wright Nature Centre. The country has the potential to further develop its tourism industry and attract visitors from around the world.

Agriculture: Trinidad and Tobago has a favorable climate for agriculture and could focus on exporting products such as cocoa, coffee, citrus fruits, and vegetables. The country could also explore the potential for organic farming and niche products such as spices and herbs.

Creative industries: Trinidad and Tobago has a vibrant cultural scene and could focus on exporting its music, film, and other creative products. The country has a rich tradition of Carnival, which could be further developed as a major tourist attraction and cultural export.

Information and communication technology (ICT): Trinidad and Tobago has a well-educated and tech-savvy workforce, and the country could focus on developing its ICT sector and exporting software, IT services, and other digital products.

Manufacturing: Trinidad and Tobago could focus on developing its manufacturing industry and exporting products such as chemicals, plastics, and processed foods.

Of course, developing any of these industries would require significant investment and effort, and there may be other factors to consider such as infrastructure, labor costs, and market demand. However, focusing on these industries could help diversify the economy and provide new opportunities for the citizens of Trinidad and Tobago.

Sounds very good, right? Devil is in the details.

Tourism is a definite possibility, but the decades-long crime situation (which I and others believe stems from socioeconomics related to the lack of diversity in the economy), as well as flash flooding and a focus on marketing rather than quality… no. I do like that it mentioned the Asa Wright Nature Centre, and if anyone actually does come down this way, I can happily point you to other places that you won’t find in the tourist brochures.

Agricultural land has been used by the government to build housing, so arable land is decreasing with every development the Housing Development Corporation creates, as well as with every agricultural plot converted to residential, commercial or industrial use, depending on who greases the wheels.

Manufacturing would be brilliant. Very little is made in Trinidad and Tobago, but if you’re going to be competing with robots and artificial intelligences in the developed world, we can throw that out.

ICT is my personal favorite, coming from a chatbot that has already got people generating code with it. Seriously, ChatGPT?

Everything ChatGPT has presented has been said more than once in the context of diversifying the economy of Trinidad and Tobago, and it’s a deep topic that most people only understand in a very cursory way. The best way to judge an economy is to observe it over time. In the grand scale of global discourse, a dual-island nation with an estimated population of 1.5 million is not as interesting to the rest of the world as Trinbagonians would like to think – like any other nation, most people think it’s the center of the universe – but it’s not a big market. Young, intelligent people leave for opportunities as soon as they can (brain drain), and what we are left with aspires to mediocrity while hiring friends over competency. A bit harsh, but a fair estimation in my opinion.

How did ChatGPT come up with this? With data it could access, and in that regard, since Trinidad and Tobago is an infinitesimal slice of global interest, not much content is generated about it other than press releases by politicians who want to be re-elected so that they can keep their positions – a situation endemic to any democracy that elects politicians, though in Trinidad and Tobago there are no maximum terms for some reason. A friend sailing through the Caribbean mentioned how hard it was to depart one of the islands, and I responded with, “Welcome to the Caribbean, where every European colonial bureaucracy has been perpetuated into stagnancy.”

Using Trinidad and Tobago as a test case – an outlier in the global information database we call the internet – can be pretty revealing: the model carries a bias it doesn’t know about, because the data it feeds on is itself biased, and that is unlikely to change.

But It’s Not All Bad.
I love the idea that these large language models can help us find the intersectionality between specialties. I have spent decades of my life doing just that. I read all sorts of things, and much of what I have done in my lifetime has been cross-referencing ideas from different specialties that I have read up on. I once solved a memory issue in a program on the Microsoft Windows operating system by pondering Costa Rican addresses over lunch. Intersectionality is where many things wander off to die these days.

Sir Isaac Newton pulled from intersectionality. One biography describes him as a multilingual alchemist whose notes were written in multiple languages, which, one must consider, is probably a reflection of his internal dialogue. He didn’t really discover gravity – people knew things fell down well before him, I’m certain – but he was able to pull from various sources and come up with a theory that he could publish, something he became famous for, and something he was infamous for within the politics of academia.

J.R.R. Tolkien, who has recently had a great movie done on his life, was a linguist who was able to pull from many different cultures to put together fiction that has transcended his death. His book “The Hobbit” and the later trilogy “The Lord of the Rings” have inspired various genres of fantasy fiction, board games and much more.

These two examples show how pulling from multiple cultures, languages and specialties is historically significant. Large Language Models are much the same.

Yet there are practical things to consider. Copyrights. Patents. Whether they are legal entities or not. The implicit biases on what they are fed, with the old software engineering ‘GIGO’ (Garbage in, garbage out) coming to mind with the potential for irrevocable recursion of supercharging that garbage and spewing it out to the silly humans who, as we have seen over the last decades, will believe anything. Our technology and marketing of it are well beyond what most people can comprehend.

We are sleeping, and our dreams of electric sheep come with an invisible electric fence with the capacity to thin the herd significantly.