Introducing Sequacious AI

Sequacious AI will answer all of your questions based on what it has scraped from the Internet! It will generate images based on everything it sucked into its learning model manifold! It will change the way you do business! It will solve the world’s mysteries for you by regurgitating other people’s content persuasively!

You’ll beat your competitors who aren’t using it at just about anything!

Sequacious is 3.7 Megafloopadons1 above the industry standard in speed!

Terms and conditions may apply.2

Is this a new product? A new service?

Nope. It’s What You Have Already – it’s just named descriptively.

It’s a descriptor for what you’re already getting: an AI-generated image that makes you feel comfortable with it, combined with text that preys on competitive anxieties akin to those of a nuclear arms race. It abuses exclamation marks.

The key is the word “sequacious”: intellectually servile, devoid of independent or original thought. It simply connects words in answers based on what it is fed and how it’s programmed. That’s why the Internet is being mined for data – initially ignoring copyright, now maybe paying lip service to it – while even one’s actions on social media are being fought over at the national level.
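To make “connects words based on what it is fed” concrete, here’s a minimal sketch in Python – a toy bigram model, nothing like a production LLM’s internals, just the statistical spirit of the thing. It “writes” by picking each next word purely from what followed that word in its training text:

```python
import random
from collections import defaultdict

# "Training": count which word follows which in the source text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

# "Generation": sequaciously pick a next word the corpus has already paired.
word = "the"
output = [word]
for _ in range(8):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug" - fluent-ish, zero understanding
```

Scale that counting up by a few billion parameters and you get fluency. The principle doesn’t change: no independent thought required.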

And it really isn’t that smart. Consider the rendering of the DIKW pyramid by DALL-E. Those who don’t know anything about the DIKW pyramid might think it’s right (which is why I made sure to put on the image that it’s wrong).

Ignore the obvious typos DALL-E made.

It’s inverted. You’d think that an AI might get information science right. It takes a lot of data to make information, a lot of information to make knowledge, and a lot of knowledge to hopefully make wisdom.

Wisdom should be at the top – that would be wise3.

A more accurate representation of a DIKW pyramid, done to demonstrate (poorly) how much is needed to ascend each level.
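To make the “it takes a lot to ascend each level” point concrete, here’s a toy sketch in Python (made-up sensor readings and thresholds, purely illustrative): each level of the pyramid compresses many items from the level below into one.

```python
# Toy DIKW compression: many raw values (data) become one summary
# (information), which feeds a pattern (knowledge). "Wisdom" - deciding
# what's actually worth doing about it - needs far more than this script has.
readings = [21.1, 21.4, 22.0, 23.5, 24.9, 26.2]   # data: raw sensor values

average = sum(readings) / len(readings)           # information: data in context
rising = readings[-1] > readings[0]               # knowledge: a pattern across the information

if rising and average > 23:                       # a made-up threshold
    print("Temperature is trending up; maybe check the cooling.")
```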

Wisdom would also be recognizing that while the generative AIs we have are sequacious, or intellectually servile, we assume they’re servile to each one of us. Because we are special, each one of us. We love that with a few keystrokes the word-puppetry will give us what we need, but that’s the illusion. It doesn’t really serve us.

It serves the people who are making money, or who want to know how to influence us. These AIs are servile to those who own them, by design – because that’s what we would do too. Sure, we get answers, we get images, and we get videos – but even our questions tell the AIs more about us than we may realize.

On Mastodon a few days ago, someone I was discussing this with made the point that some company – I forget who, I think it was Google – anonymizes data, and that’s a fair point.

How many times have you pictured someone in your head without knowing their name? Anonymized data can be like that: descriptive enough to identify someone even without the name attached. In 2016, Google’s AI could tell exactly where an image was taken. Humans might be a little harder. It’s 2024 now, though.
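A toy sketch of why “anonymized” can still point at one person (hypothetical records, illustrating the classic quasi-identifier problem): strip the names, and a combination of a few innocuous fields can still single someone out.

```python
from collections import Counter

# Hypothetical "anonymized" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "10001", "birth_year": 1980, "gender": "F"},
    {"zip": "10001", "birth_year": 1980, "gender": "M"},
    {"zip": "10002", "birth_year": 1975, "gender": "F"},
    {"zip": "10001", "birth_year": 1980, "gender": "F"},
]

# Count how many records share each combination of fields.
combos = Counter((r["zip"], r["birth_year"], r["gender"]) for r in records)

# Any combination that appears once identifies exactly one person.
unique = [combo for combo, count in combos.items() if count == 1]
print(unique)  # [('10001', 1980, 'M'), ('10002', 1975, 'F')]
```

With real datasets the fields are richer and the matching is far better, which is the whole problem.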

While our own species wrestles its way to wisdom, don’t confuse data with information, information with knowledge, and knowledge with wisdom in this information age.

That would make you sequacious.

  1. Megafloopadons is not a thing, but let’s see if that makes it into a document somewhere. ↩︎
  2. This will have a lot of words that pretty much make it all a Faustian bargain, with every pseudo-victory being potentially Pyrrhic. ↩︎
  3. It’s interesting to consider that the inversion might be to avoid breaking someone’s copyright, and it makes one wonder if that isn’t hard coded in somewhere. ↩︎

Critical Thinking In The Age Of AI.

Critical thinking is the ability to suspend judgement and to consider evidence, observations and perspectives in order to form one, requiring rational, skeptical and unbiased analysis and evaluation.

It can be difficult, particularly being unbiased, rational and skeptical in a world that seems to demand ever-faster responses from us.

Joe Árvai, a psychologist who has done research on decision making, recently wrote an article about critical thinking and artificial intelligence.

“…my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

It’s a good article, well worth the read, and it’s in the vein of what I have been writing recently about ant mills and social media. Web 2.0 was built on commerce which was built on marketing. Good marketing is about persuasion (a product or service is good for the consumer), bad marketing is about manipulation (where a product or service is not good for the consumer). It’s hard to tell the difference between the two.

Inputs and Outputs.

We don’t know exactly how much of Web 2.0 was shoveled into the engines of generative AI learning models, but we do know that chatbots and generative AI are now considered more persuasive than humans. In fact, ChatGPT 4 is presently considered 82% more persuasive than humans, as I mentioned in my first AI roundup.

This should be at least a little disturbing, particularly when there are already sites telling people how to get GPT-4 to create more persuasive content, such as this one. And yet the key difference between persuasion and manipulation is whether it’s good for the consumer of the information – a key problem with fake news.

Worse, we have all seen products and services that had brilliant marketing but were not good products or services. If you have a bunch of stuff sitting and collecting dust, you fell victim to marketing, and arguably, manipulation rather than persuasion.

It’s not difficult to see that the marketing of AI itself could be persuasive or manipulative. If you had a tool that could persuade people they need the tool, wouldn’t you use it? Of course you would. Do they need it? Ultimately, that’s up to the consumers – but if they in turn are generating AI content that feeds the learning models, in what is known as synthetic data, that creates its own problems.

Critical Thought

Before generative AI became mainstream, we saw issues with people sharing fake news stories because they had catchy headlines and fed confirmation bias. A bit of applied critical thought could have avoided much of that, but it remained a problem. Web 2.0 to the present has always been about getting eyes on content quickly so that advertising impressions increase, and some people were more ethical about that than others.

Most people don’t really understand their own biases, but social media companies implicitly do – we tell them with our every click, our every scroll.

This is compounded by scientific evidence that attention spans are shrinking. Based on research, the new average attention span is 47 seconds. That’s not a lot of time to do critical thinking before liking or sharing something.

While there’s no direct measure of how much critical thought is being applied, the diminished average attention span is a solid indicator that, on average, people are using less of it.

“…Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of their communities more than they might value their own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser…”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

In an age of generative artificial intelligence that is here to stay, it’s paramount that we understand ourselves better as individuals and collectively so that we can make thoughtful decisions.

Facebook’s Algorithms Spamming Users.

If you haven’t left Facebook yet, as I have, you’ve probably noticed a lot of AI spam. I did when I was there and blocked a bunch of it (it was hard to keep up with).

Well, it isn’t just you.

“…What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”…”

“Facebook’s Algorithm Is Boosting AI Spam That Links to AI-Generated, Ad-Laden Click Farms”, Jason Koebler, 404 Media, March 19, 2024

So not only are the algorithms arbitrarily restricting user accounts, as they did mine, but they’re feeding people spam to an extent noticeable well beyond any one individual.

Meanwhile, Facebook has been buying GPUs to develop ‘next level’ AI, when in fact their algorithms are about as gullible as their GPU purchases are numerous.

Glad I left that platform.

Social Networks, Privacy, Revenue and AI.

I’ve seen more and more people leaving Facebook because their content just isn’t getting into timelines. The possibilities there are interesting to consider. While some of the complaints about the Facebook algorithms are fun to read, writing those sorts of complaints doesn’t really accomplish much. It’s not as if Facebook is going to change its algorithms over complaints.

As I’ve pointed out to people, those using Facebook aren’t the customers. People using Twitter-X aren’t the customers either. To be a customer, you have to buy something. Who buys things on social networks? Advertisers, for one, of course.

That’s something Elon Musk didn’t quite get the memo on. Why would he be this confident? Hubris? Maybe – that always seems a factor – but it’s probably something more sensible.

Billionaires used to be much better spoken, it seems.

There’s something pretty valuable in social networks that people don’t see. It’s the user data – which, strangely, is what the canceled Westworld was about. The real value is in being able to predict what people want and influence outcomes, much as the television series showed after the first season.1

Many people seem to think that privacy is only about credit card information and personal details. It also includes choices that allow algorithms to predict choices. Humans are black boxes in this regard, and if you have enough computing power you can go around poking and prodding to see the results.

Have you noticed that these social networks are linked somehow to AI initiatives? Facebook is linked, through Meta, to Meta’s AI initiatives. Musk, chief twit at X, has his fingers in the AI pie too.

Artificial intelligences need learning models, and if you own a social network, you not only get to poke and prod – you have the potential to influence. Are your future choices something that falls under privacy? Probably not – but your past choices probably should be, because that’s how you get to predicting and influencing future choices.
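A crude sketch of that point (a hypothetical click log and bare counting, not any real recommender): past choices alone are enough to start predicting, and therefore steering, the next one.

```python
from collections import Counter

# Hypothetical interaction history: topics one user clicked on, in order.
clicks = ["politics", "cars", "politics", "gadgets", "politics", "cars"]

# The most primitive prediction possible: bet on what they clicked most.
prediction = Counter(clicks).most_common(1)[0][0]
print(prediction)  # 'politics' - now you know what to show them next, and which ads cost more
```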

I never really got into Twitter; Facebook was less interruptive. On the surface, these started off as content management systems that provided a service, with paid advertising to support it, yet now one has to wonder at the value of the user data. Back in 2018, we learned Cambridge Analytica had harvested data from 50 million Facebook users. Zuckerberg later apologized and talked about how third-party apps would be limited. To his credit, I think it was handled pretty well.

Still, it also signaled how powerful and useful that data could be, and if you own a social network, that would at least give you pause. After all, Cambridge Analytica influenced politics at the least, and that could have also influenced markets. The butterfly effect reigns supreme in the age of big data and artificial intelligence.

This is why privacy is important in the age of artificial intelligence learning models, algorithms, and so forth. It can impact the responses one gets from any large language model, which is why there are pretty serious questions regarding privacy, copyright, and other things related to training them. Bias leaks into everything, and popular bias on social networks is simply about the most vocal and repetitive – not about what is actually correct. This is also why canceling as a cultural phenomenon can be so damaging: it’s a nuclear option in the world of information, and oddly, large groups of smart or stupid people can use it with impunity.

This is why we presently see large language models hedge on some questions – because of conflicts within the learning model, as well as some well-designed algorithms. For that we should be a little grateful.

We should probably be lobbying to find out what is in these learning models that artificial intelligences are given, in much the same way we used2 to grill people who would represent us collectively. Sure, Elon Musk might be taking a financial hit, but what if it’s a gambit to leverage user data for bigger returns later, with his ethics embedded in how he gets his companies to do that?

You don’t have to like or dislike people to question them and how they use this data, but we should all be a bit concerned. Yes, artificial intelligence is pretty cool and interesting, but unleashing it without questioning the integrity of the information it was trained on is, at the least, foolish.

Be careful what you share, what you say, who you interact with and why. Quizzes that require access to your user profile are definitely questionable: that information, and information about the people you’re connected with, quickly gets folded into a digital shadow of yourself – part of the larger crowd data that can influence the now and the future.

  1. This is not to say it was canceled for this reason. I only recently watched it, and have yet to finish season 3, but it’s very compelling and topical content for the now. Great writing and acting. ↩︎
  2. We don’t seem to be that good at grilling people these days, perhaps because of all of this and more. ↩︎

A Tale of Two AIs.

2023 has been the year when artificial intelligence went from science fiction to technological possibility. It’s become so ubiquitous that on Christmas Eve, chatting with acquaintances and friends, people from all walks of life were talking about it.

I found it disappointing, honestly, because it was pretty clear I was talking about one sort of artificial intelligence where others were talking about another sort of artificial intelligence.

One, a lawyer, mentioned that she’d had lunch with an artificial intelligence expert. On listening and with a few questions, she was talking about what sounded to be a power user of ChatGPT. When I started talking about some of the things I write about here related to artificial intelligence, she said that they had not discussed all of that. Apparently I went a bit too far because she then asked, “But do you use the latest version of ChatGPT that you have to pay for like this expert does?”

Well, yes, I do. I don’t use it to write articles and if I do use ChatGPT to write something, I quote it. I have my own illusions, I don’t need to take credit for any hallucinations ChatGPT has. I also don’t want to incorporate strategic deception in my writing. To me, it’s a novelty and something I often find flaws with. I’m not going to beat up ChatGPT, it has usefulness, but the fact that I can use DALL-E to generate some images, like above, is helpful.

What disturbed me is that she thought that was what an artificial intelligence expert does. That seems a pretty low bar; I wouldn’t claim to be an artificial intelligence expert because I spend $20/month. I’m exploring it like many others and stepping back to look at problematic consequences, of which there are many. If we don’t acknowledge and deal with those, the rest doesn’t seem to matter as much.

That’s the trouble. Artificial intelligence, when discussed or written about, falls into two main categories that co-exist.

Marketed AI.

The most prominent one is the marketing hype right now, where we get ‘experts’ who, for whatever reason, claim a title for being power users of the current stabs at artificial intelligence. This is what I believe Cory Doctorow wrote about with respect to the ‘AI bubble’. It’s more about perception than reality, to my mind, and in some ways it can be good, because it gets people to spend money so that, hopefully, those who collect it can do something more about the second category.

Yet it wasn’t long ago that people were selling snake oil. In recent decades, I’ve seen ‘website experts’ become ‘social media experts’, and now suddenly we have ‘artificial intelligence experts’.

Actual Artificial Intelligence.

The second category is actually artificial intelligence itself, which I believe we may be getting closer to. It’s where expert systems, which have been around since the 1970s, have made some quantum leaps. When I look at ChatGPT, as an example, I see an inference engine (the code) and the knowledge base which is processed from a learning model. That’s oversimplified, I know, and one can get into semantic arguments, but conceptually it’s pretty close to reality.
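In the expert-system sense, “inference engine plus knowledge base” can be sketched in a few lines of Python – a toy forward-chaining engine with made-up rules, nothing like GPT’s actual internals, but it shows the separation I mean:

```python
# The knowledge base is just data; the inference engine below never changes.
knowledge_base = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Forward-chain: apply rules until no new facts appear (the inference engine)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"has_fever", "has_cough", "short_of_breath"}, knowledge_base)
print(result)  # includes 'possible_flu' and 'see_doctor'
```

Swap in different rules and the same engine becomes a different “expert” – which is the oversimplified point: the code stays put, the knowledge changes.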

If you take a large language model like ChatGPT and feed it only medical information, it can diagnose based on the symptoms a patient has. Feed it only information on a programming language like COBOL, and it can probably write COBOL code pretty well. ChatGPT has a learning model that we don’t really know, and it is apparently pretty diverse, which allows us to do a lot of pretty interesting things besides generating silly images for blog posts. I’ve seen some code in JavaScript done this way, and I just generated some C++ code as a quick test with ChatGPT 4 that, yes, works – and it does something better than most programmers do: it documents how it works.
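For what it’s worth, that kind of quick test is just an API call away. A minimal sketch using OpenAI’s Python client – assuming an OPENAI_API_KEY in your environment, and treating "gpt-4" as a placeholder for whatever model you actually have access to:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; substitute the model you have access to
    messages=[{
        "role": "user",
        "content": "Write a C++ function that reverses a string, "
                   "with comments explaining how it works.",
    }],
)

print(response.choices[0].message.content)  # the generated, documented code
```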

I’d written about software engineers needing to evolve too with respect to artificial intelligence.

It has potential to revolutionize everything, all walks of life, and it’s going to be really messy because it will change jobs and even replace them. It will be something that will have psychological and sociological consequences, impacting governments and the ways we do… everything.

The Mix of Marketed vs. Actual

The argument could be made that without marketing, businesses would not make enough money for the continued expense of pushing the boundaries of artificial intelligence. Personally, I think this is true. The trouble is that marketing takes over what people believe artificial intelligence is. This goes with what Doctorow wrote about the bubble as well as what Joe McKendrick wrote about artificial intelligence fading into the background. When the phrase is over-used and misused in businesses, which seems to already be happening, the novelty wears off and the bubble pops in business.

That’s kind of what happened with social media and ‘social media experts’.

The marketing aspect also causes people to worry about their own jobs – which maybe they don’t want, but they want income, because there are bills to pay in modern society. The fear of some is tangible, and with good reason. All the large language models use a very broad brush in answering those fears, as do the CEOs of the companies: we’ll just retrain everyone. There are people getting closer to retirement, and what companies have been doing to save money and improve their stock performance is finding reasons to ‘let people go’, so that comfort is spoken from on high with the same sensitivity as “Let them eat cake“. It’s dismissive and ignores the reality people live in.

Finding the right balance is hard when there’s no control of the environment. People are talking about what bubbles leave behind, but they don’t talk as much about who they leave behind. Harvard Business Review predicted that the companies that get rid of jobs with artificial intelligence will eventually get left behind, but eventually can be a long time and can have some unpredictable economic consequences.

‘Eventually’ can be a long time.

The balance must be struck by the technology leaders in artificial intelligence, and that seems about as unlikely as it was with the dot-com boom. Maybe ChatGPT 4 can help them out – if they haven’t been feeding it too many of their own claims.

And no, you aren’t an ‘artificial intelligence expert’ if you are a paid user of artificial intelligence of any platform alone, just like buying a subscription to a medical journal doesn’t make you a medical professional.

The Walls Have Ears.

Years ago, I had the then-new Amazon Echo, I had multiple Kindles, and I had a cough. A bad cough. A cough so bad that I ended up going to a hospital over it and got some scary news, which is a story by itself.

What was weird was that the Kindles started showing ads for cough drops and cough syrups. Just out of the blue. I hadn’t shopped for those on Amazon, and I think it unlikely that they were getting updates from my pharmacy on my over-the-counter habits.

This was creepy.

I donated the Echo to someone else, and the Kindles went back to advertising books that were semi-interesting. No more over-the-counter stuff for coughs. This is purely anecdotal, but as someone who values his privacy, I opted simply not to have it around. My life was complete without an Echo, and I began questioning why I had gotten it in the first place.

Since then, I’ve just quietly nodded my head when people say they think devices are listening to them. If poked with a stick, I tell the story. Mobile phones, with all the apps that use voice, are a big hole.

Let’s be honest about ourselves: We are, collectively, pretty bad at settings and making sure we don’t leak information we don’t want to. It’s not completely our fault either. Staying on top of software settings when the software is in a constant state of being updated is not an easy task.

It turns out that people who have been concerned about it, as I am, may have a reason – though it’s being denied:

...In a Nov. 28 blog post (which also has been deleted), CMG Local Solutions said its “Active Listening” technology can pick up conversations to provide local advertisers a weekly list of consumers who are in the market for a given product or service. Example it cited of what Active Listening can detect included “Do we need a bigger vehicle?”; “I feel like my lawyer is screwing me”; and “It’s time for us to get serious about buying a house.”

There’s a big question as to why someone would even make that claim in the first place without it being true. Maybe it was a drunk intern. Maybe it was an upset employee leaving with a ‘fish in the ceiling’1.

I could put on a tinfoil hat and say that the NSA probably has backdoors on every operating system made in the United States. It’s credible after 9/11, but when I write ‘after 9/11’ I realize there’s an entire generation who doesn’t remember how things were before. Before that, we were less concerned about who was listening in on us because the need to listen to everyone was much less. The word ‘terrorism’ had many different definitions in government then and almost none of them seemed to agree. It was a troublesome time for technology.

We have generations that are blindly trusting these technologies at this point because they’ve been raised on them much as I was raised on Sesame Street. Sesame Street, though, was not too interested in my shopping habits or trying to prep me for a market to buy a certain line of hardware, software, or subscription services. When you think about it, GenX was being sold on the idea of learning stuff whereas subsequent generations have been increasingly marketed to under the guise of education.

All of this should be something that is at least on our radars, something we understand as a possibility.

If the government is doing it, we can’t really depend on them to get companies not to – and we don’t know who is doing it at all.

It takes one story – a cough around an Echo – to make it feel real, if you’re paying attention.

  1. At one company I worked for, someone who had quit had left a few fish in the ceiling tiles in a cube farm. It took months for people to find out where the smell was coming from. ↩︎

Beyond The Bubble.

Cory Doctorow has said that AI is a bubble, which in some ways makes sense. After all, what is being marketed as artificial intelligence is pretty much a matter of statistics trained to give users what they want based on what they have wanted previously, collectively.

That, at least to me, isn’t really artificial intelligence as much as it’s math as smoke and mirrors giving the illusion of intelligence. That’s an opinion, of course, but when something you expect to give you what you want always gives you what you want, I’m not sure there is intelligence involved. It sounds more like subservience.

In fact, as a global society, we should probably be asking more of what we expect from artificial intelligences rather than having a handful of people dictate what comes next. Unfortunately, that’s not the way things seem to work with our global society.

The reality, as Joe McKendrick pointed out, is that AI as marketed now will simply fade into the background, becoming invisible – which it already has. New problems arise from that, particularly around accountability.

I expanded on that in “The Invisible Future“.

Cory Doctorow is pretty much on the money despite being mocked in some places. It’s a marketing bubble about what has been marketed as artificial intelligence. What we have are useful tools at this point that can make some jobs obsolete, which says more about the jobs than anything else. If, for example, you think that a large language model can replace a human’s ability to communicate with other humans, you could be right to an extent – but virtual is not actual.

What will be left next year of all that has been marketed? The stuff behind the scenes, fading into the background – the stuff that is almost never profitable by itself.

Yet where Cory Doctorow is a bit wrong is that imaginations have now been harnessed toward artificial intelligence, and maybe we will actually produce an intelligence that is an actual intelligence. Maybe, like little spores, such intelligences will help us expand our knowledge beyond ourselves – fitted with sensors so that they can experience the world themselves rather than being given regurgitated human knowledge.

After all, we create humans much more cheaply than we do artificial intelligences.

I think that might be a better thing to achieve, but… that’s just an opinion.

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell and sight to go by. The rest comes from what we glean through those things, be it other people, technology, language, culture, etc.

If the bubble is too small, we feel it a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have passed down through the generations, how others have helped us translate our world and all that is in it, etc. We’re part of a greater distillation process, where because of our own limitations we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of, ‘be nice to each other’.

If we view this traffic as moving through time, bubbles are racing toward the future all at once, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward, one generation at a time. Generally speaking, until broadcast media this was a very local process. Hence the red dots trying to get us to do things, wielded by those who wish us to do things – from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made it global, at first by giving people information, then by broadcasting opinions to stay sustainable through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joe’s!”? I doubt that.

However, those with fiscal interests can decide what the deep learning of artificial intelligences is exposed to. Machine learning is largely about clever algorithms and pruning the data the algorithms are trained on, and those doing the pruning are certainly not the most unbiased of humanity. I wouldn’t say they are the most biased either – we’re all biased by our bubbles.
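The pruning point is easy to demonstrate in miniature (a toy word-counter over made-up ‘corpora’, not real machine learning): filter what goes in, and you’ve chosen what comes out before any algorithm runs.

```python
from collections import Counter

def top_word(corpus: str) -> str:
    """The 'model': simply the most frequent word in whatever it was fed."""
    return Counter(corpus.split()).most_common(1)[0][0]

full = "cats dogs cats birds dogs cats"
pruned = "dogs birds dogs"  # same source, with 'cats' pruned upstream

print(top_word(full))    # cats
print(top_word(pruned))  # dogs - the bias arrived with the data selection
```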

It’s Pandora’s Box. How do we decide what should go in and what should stay out? Well, we can’t, really. Nobody is actually telling us what’s in them now. Our education systems, too, show us that this is not necessarily something we’re good at.

Google, AI, and Search.

It’s no secret that Google is in the AI “arms race”, as it has been called, and there is some criticism that they’re in too much of a hurry.

“…The [AI] answer is displayed at the top, and on the left are links to sites from which it drew its answer. But this will look very different on the smaller screen of a mobile device. Users will need to scroll down to see those sources, never mind other sites that might be useful to their search.

That should worry both Google’s users and paying customers like advertisers and website publishers. More than 60% of Google searches in the US occur on mobile phones. That means for most people, Google’s AI answer will take up most of the phone screen. Will people keep scrolling around, looking for citations to tap? Probably not…”

“Google Is in Too Much of a Hurry on AI Search”, Parmy Olson, Bloomberg (via Washington Post), May 12, 2023

This could have a pretty devastating effect on Web 2.0 business models, which evolved around search engine results. That, in turn, could be bad for Google’s business model as it stands, which seems to indicate that their business model will be evolving soon too.

Will they go to a subscription model for users? It would be something that makes sense – if they didn’t have competition. They do. The other shoe on this has to drop. One thing we can expect from Google is that they have thought this through, and as an 800 lb gorilla that admonishes those that don’t follow standards, it will be interesting to see how the industry reacts.

It may change, and some people are already advocating for that.

“…Google Search’s biggest strength, in my opinion, was its perfect simplicity. Punch in some words, and the machine gives you everything the internet has to offer on the subject, with every link neatly cataloged and sorted in order of relevance. Sure, most of us will only ever click the first link it presents – god forbid we venture to the dark recesses of the second page of results – but that was enough. It didn’t need to change; it didn’t need this.

There’s an argument to be made that search AI isn’t for simple inquiries. It’s not useful for telling you the time in Tokyo right now, Google can do that fine already. It’s for the niche interrogations: stuff like ‘best restaurant in Shibuya, Tokyo for a vegan and a lactose intolerant person who doesn’t like tofu’. While existing deep-learning models might struggle a bit, we’re not that far off AIs being able to provide concise and accurate answers to queries like that…”

“Cramming AI into search results proves Google has forgotten what made it good”, Christian Guyton, TechRadar, May 11, 2023

Guyton’s article (linked above in the citation) is well worth the read in its entirety. It has pictures and everything.

The bottom line on all of this is that we don’t know what the AIs are trained on, we don’t know how it’s going to affect business models for online publishers, and we don’t know if it’s actually going to improve the user experience.

Normalizing The Mob.

I was glancing around social media when I saw NPR’s “Hard times Are Ahead for news sites and social media. Is this the end of Web 2.0?”:

…”The news industry didn’t really have a profit model other than trying to get eyeballs and earn digital advertising revenue,” said Courtney Radsch, who studies technology and media at UCLA. “But what we saw is that the tech platforms, specifically Google and Facebook, ended up controlling that digital advertising infrastructure.”…

I suppose now that NPR has caught up to reality on this, it’s time to beat it with an old printing press. It’s been a problem for at least one decade, perhaps two, and it impacts anyone trying to create content on the Internet. The more agile companies and individuals have been good at monetizing trends in what the mob wants, creating a digital Vox Populi which can be outright… outrageous.

A few days ago I saw a reel of a lesbian manscaping a sheep because it was popular. To be fair, I didn’t realize that this was necessary or had a technique involved, but in less than a minute I was taught a masterclass in shaving a male sheep’s nether regions… which, honestly, I could have lived without, but it was so outrageous I simply watched and said, “What an example of outrageous content”. I’m sure it has a niche market, but I am not the niche. It just popped up in my feed.

It goes further than that though.

The regular ‘media’ has become just as outrageous, with Tucker Carlson’s nether regions being shaved by Fox News, after being outed to having opinions different than those he expressed. I imagine he really didn’t have his nether regions shaved – I do not want to know – but he did get fired, which for a talking head is pretty much the same. That’s where these little economic bubbles come in, where Tucker Carlson likely made sure he made enough money for as long as he could selling people what they wanted even if it had nothing to do with truth. It’s marketing. And wherever he lands, I’m sure he’ll have a fluffy landing, perhaps made from the wool of the nether regions of a male sheep.

He’s going to be ok. The people he fleeced with their attention to his misdirection will range from upset to simply waiting for the next Great Faux Hope. That’s the way of media these days. You don’t have to tell anything even resembling the truth, you simply have to cash in fast enough to live the rest of your life sipping mai-tais. It helps if you have some truth in it so it’s defensible, but that is no longer necessary. Media has become fiction, which should be irritating fiction writers everywhere.

The news used to be pretty boring in the last century, but it was necessary to understand the world. Now, it’s impossible to understand the world because the people who distribute facts without embellishing are not as interesting to people, with the exception of comedians who have become this age’s philosophers. Thank you, George Carlin, wherever you are.

What’s happened with ‘Web 2.0’ is what was bound to happen: the economy surrounding it is normalizing. The hype it used to have is being eaten voraciously by large language models trained on the hype-fest of Web 2.0 – probably full of pithy marketing slogans whose psychology makes them powerful red dots, eating the most valuable resource an individual has: time.

Now that resource is being spent on ChatGPT, where users are given the illusion of creating content that amuses them. That hype will eventually fall away too, since what the language models were trained on was content available on the web – content so full of psychological marketing that it’s similar to masturbation, giving us the results we want without involving another human.

The content of Web 2.0 was for the mob – so much marketing that a Cluetrain was created. But the products largely didn’t keep pace with the marketing, as usual. The new tech is just normalizing the old tech, which was normalizing what the mob wanted.

Same as it ever was. Do something different.