AI: Technology, Skepticism, and Your Underwear.

Here are two images depicting futuristic underwear powered by AI technology. The designs are sleek and modern, featuring smart fibers and sensors, with a minimalist setting to emphasize the advanced technology.

There’s a balance between technology and humanity that at least some of us think has tipped beyond desirable limits now. In fact, it’s been out of balance for some time, and to illustrate the point I had DALL-E generate some images of AI-powered underwear – because if technology were resting close to one’s nether regions, it might be something one would be more careful about, from privacy to quality control.

“…Those social media companies, again, offered convenience and — as we all know — a too-good-to-be-true promise of free and open access. We closed our blogs, got in line and ceded control of our social graphs. Drawbridges were rolled up, ads increased and nobody left — at least not in droves. Everyone is there so everyone stays.

Journalists and newspapers were drawn in, promised an audience and were gifted with capricious intermediaries that destroyed the profession and industry.

We lost our handle on what is and was true, stopped having conversations and started yelling at their representations. It became easier to shout at someone online than it was to have a healthier discourse…”

“The tech industry doesn’t deserve optimism, it has earned skepticism”, Cory Dransfeldt, CoryD.Dev, May 6th, 2024

Cory writes quite poignantly in that article of the promises made by technology in the past. In that excerpt, he also alludes to what I call the ‘Red Dots’ that keep us distracted, increasingly feral, and rob us of what is really important to us – or should be.

This melds well with a point in a Scientific American opinion piece I read today, particularly the aspect of AI and social media:

…A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to the most insecure version of ourselves and push our buttons to achieve their own desired ends…

“AI Doesn’t Threaten Humanity. Its Owners Do”, Joseph Jones, Scientific American, May 6th, 2024.

Again, these are things I have written about regarding AI, which connects to social media, which connects to you, gentle reader: your habits, your information, your privacy. In essence, your life.

I’d say we’re on the cusp of something, but it’s the same cusp. We can and should be skeptical of these companies trying to sell us a future that they keep promising but have never really given us.

There’s a flood of AI marketing, silly AI products, and sillier ideas about AI that confound me, like AI chatbots in the publishing industry so you can chat with a bot about a book you read.

There’s silly, and there’s worrisome, like teens becoming addicted to AI chatbots – likely because there’s less risk than dealing with other human beings, the possible feelings of rejection, the anxiety associated with it, and the very human aspect of…

Being human.

I have advocated and will continue to advocate for sensible use of technology, but I’m not seeing as much of it mixed in with the marketing campaigns that suck the air out of the room, filling it with a synthetic ether that has the potential to drown who we are rather than help us maximize who we can be.

We should be treating new technology like underwear we will put next to our nether regions. Carefully. Be picky. And if it’s too good to be true – it likely is. And if the person selling it to you is known for shortcuts, expect uncomfortable drafts around your nether regions.

I’d grown tired of writing about it, and I thank those whose articles I mentioned for giving me that second wind.

DHS Artificial Intelligence Safety And Security Board Has Some Odd Appointments.

Now that we’ve seen that generative artificial intelligence can be trained ethically, without breaking copyright laws, the list of people appointed to the DHS Artificial Intelligence Safety and Security Board seems less than ideal.

The Board is supposed to ‘advance AI’s responsible development and deployment’ (emphasis mine), yet some on that Board took shortcuts.

Shortcuts in relation to any national security issue seem like a bad thing.

Here’s the list.

There are some dubious companies involved. The argument can be made – and it probably will be – that these companies are part of national infrastructure, but is it national infrastructure that controls the United States, or is it the other way around?

I don’t know that these picks are good or bad. I will say that there are some that, at least in the eyes of others, have been irresponsible. That would fall under Demonstrated Unreliability.

Copyright, AI, And Doing It Ethically.

It’s no secret that the generative, sequacious artificial intelligences out there have copyright issues. I’ve written about it myself quite a bit.

It’s almost become cliche to mention copyright and AI in the same sentence, with Sam Altman having said that there would be no way to do generative AI without all that material – toward the end of this post, you’ll see that someone proved that wrong.

“Copyright Wars pt. 2: AI vs the Public”, by Toni Aittoniemi in January of 2023, is a really good read on the problem, as the large AI companies have sucked in content without permission. If an individual did it, the large companies would call it ‘piracy’, but now, it’s… not? That’s crazy.

The timing of finding Toni on Mastodon was perfect. Yesterday, I found a story on Wired that demonstrates some of what Toni wrote last year, where he posed a potential way to handle the legal dilemmas surrounding creators’ rights – we call it ‘copyright’ because someone was pretty unimaginative and pushed two words together for only one meaning.

In 2023, OpenAI told the UK parliament that it was “impossible” to train leading AI models without using copyrighted materials. It’s a popular stance in the AI world, where OpenAI and other leading players have used materials slurped up online to train the models powering chatbots and image generators, triggering a wave of lawsuits alleging copyright infringement.

Two announcements Wednesday offer evidence that large language models can in fact be trained without the permissionless use of copyrighted materials.

A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification for a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built in a different way to the AI industry’s contentious norm.

“There’s no fundamental reason why someone couldn’t train an LLM fairly,” says Ed Newton-Rex, CEO of Fairly Trained. He founded the nonprofit in January 2024 after quitting his executive role at image-generation startup Stability AI because he disagreed with its policy of scraping content without permission….

“Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content”, Kate Knibbs, Wired.com, March 20th, 2024

It struck me yesterday that a lot of us writing and communicating about the copyright issue didn’t address how it could be handled. It’s not that we didn’t know it could be handled; it’s just that we haven’t addressed it as much as we should. I went to sleep considering that and in the morning found that Toni had done much of the legwork.

What Toni wrote extends on that kind of system:

…Any training database used to create any commercial AI model should be legally bound to contain an identity that can be linked to a real-world person if so required. This should extend to databases already used to train existing AI’s that do not yet have capabilities to report their sources. This works in two ways to better allow us to integrate AI in our legal frameworks: Firstly, we allow the judicial system to work its way with handling the human side of the equation instead of concentrating on mere technological tidbits. Secondly, a requirement of openness will guarantee researchers can identify and question the providers of these technologies on issues of equality, fairness or bias in the training data. Creation of new judicial experts in this field will certainly be required from the public sector…

“Copyright Wars pt. 2: AI vs the Public”, Toni Aittoniemi, Gimulnautti, January 13th, 2023.

This is sort of like – and it’s my interpretation – a tokenized citation system built into the model and its training data. This would expand on what, as an example, Perplexity AI does, by allowing style and ideas to have provenance.
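
To make the idea concrete, here’s a minimal sketch of what such a provenance-carrying training record could look like. It’s purely illustrative – not anything an existing AI company implements, and every name in it is hypothetical – but it shows how each entry in a training dataset could be bound to a real-world rights holder and a license, along the lines Toni describes:

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    """One entry in a training dataset, carrying provenance.

    Hypothetical structure: each record links back to a real-world
    rights holder and a license, so that questions of permission,
    bias, and fairness can actually be asked of the dataset.
    """
    text: str
    source_url: str     # where the content came from
    rights_holder: str  # the identifiable person or entity behind it
    license: str        # e.g. "public domain", "CC-BY-4.0", "licensed"


def provenance_report(records: list[TrainingRecord]) -> dict[str, int]:
    """Count records per rights holder - the sort of openness that would
    let researchers audit what a model was actually trained on."""
    counts: dict[str, int] = {}
    for record in records:
        counts[record.rights_holder] = counts.get(record.rights_holder, 0) + 1
    return counts


# A tiny, made-up dataset and its provenance summary.
dataset = [
    TrainingRecord("Some public-domain text...", "https://example.org/a",
                   "Jane Author", "public domain"),
    TrainingRecord("A licensed article...", "https://example.org/b",
                   "Example News Co.", "licensed"),
]
print(provenance_report(dataset))  # {'Jane Author': 1, 'Example News Co.': 1}
```

The point isn’t the code; it’s that provenance is a data-design problem that could be built in from the start rather than reconstructed after the lawsuits.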

This is some great food for thought for the weekend.

Introducing Sequacious AI

Sequacious AI will answer all of your questions based on what it has scraped from the Internet! It will generate images based on everything it sucked into its learning model manifold! It will change the way you do business! It will solve the world’s mysteries for you by regurgitating other people’s content persuasively!

You’ll beat your competitors who aren’t using it at just about anything!

Sequacious is 3.7 Megafloopadons1 above the industry standard in speed!

Terms and conditions may apply.2

Is this a new product? A new service?

Nope. It’s What You Have Already, it’s just named descriptively.

It’s a descriptor for what you’re already getting: an AI-generated image that makes you feel comfortable with it, combined with text that preys on anxieties about competition, akin to a nuclear arms race. It abuses exclamation marks.

The key is the word “sequacious”: intellectually servile, devoid of independent or original thought. It simply connects words in answers based on what it is fed and how it’s programmed. That’s why the Internet is being mined for data – initially ignoring copyright and now maybe paying lip service to it – while even one’s actions on social media are being fought over at the national level.

And it really isn’t that smart. Consider the rendering of the DIKW pyramid by DALL-E. Those who don’t know anything about the DIKW pyramid might think it’s right (which is why I made sure to put on the image that it’s wrong).

Ignore the obvious typos DALL-E made.

It’s inverted. You’d think that an AI might get information science right. It takes a lot of data to make information, a lot of information to make knowledge, and a lot of knowledge to hopefully make wisdom.

Wisdom should be at the top – that would be wise3.

A more accurate representation of a DIKW pyramid, done to demonstrate (poorly) how much is needed to ascend each level.

Wisdom would also be recognizing that while the generative AIs we have are sequacious, or intellectually servile, we assume they’re servile to each one of us. Because we are special, each one of us. We love that with a few keystrokes the word-puppetry will give us what we need, but that’s the illusion. It doesn’t really serve us.

It serves the people who are making money, or who want to know how to influence us. These systems are servile to those who own them, by design – because that’s what we would do too. Sure, we get answers, we get images, and we get videos – but even our questions tell the AIs more about us than we may realize.

On Mastodon, I was discussing this a few days ago and someone made the point that some company – I forget which, I think it was Google – anonymizes data, and that’s a fair point.

How many times have you pictured someone in your head and not known their name? Anonymized data can be like that. It’s descriptive enough to identify someone. In 2016, Google’s AI could tell exactly where an image was taken. Humans might be a little bit harder. It’s 2024 now, though.
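
A toy sketch with made-up numbers shows why ‘anonymized’ isn’t the same as ‘anonymous’: strip out the names, and the combination of ordinary attributes can still single a person out.

```python
# Toy illustration with made-up data: no names anywhere, yet most
# "anonymous" records are still one of a kind once you combine
# a few ordinary attributes.
from collections import Counter

records = [
    {"zip": "00926", "age": 34, "device": "Pixel 7",   "language": "en-TT"},
    {"zip": "00926", "age": 34, "device": "iPhone 12", "language": "en-US"},
    {"zip": "10001", "age": 52, "device": "Pixel 7",   "language": "en-US"},
    {"zip": "10001", "age": 52, "device": "Pixel 7",   "language": "en-US"},
]

# Count how many records share each combination of attributes.
combos = Counter(
    (r["zip"], r["age"], r["device"], r["language"]) for r in records
)
unique = sum(1 for count in combos.values() if count == 1)
print(f"{unique} of {len(records)} 'anonymized' records are one of a kind.")
# -> 2 of 4 'anonymized' records are one of a kind.
```

Real datasets carry far more attributes than this, which is what makes the description precise enough to point at one person even without a name attached.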

While our own species wrestles its way to wisdom, don’t confuse data with information, information with knowledge, and knowledge with wisdom in this information age.

That would make you sequacious.

  1. Megafloopadons is not a thing, but let’s see if that makes it into a document somewhere. ↩︎
  2. This will have a lot of words that pretty much make it all a Faustian bargain, with every pseudo-victory being potentially Pyrrhic. ↩︎
  3. It’s interesting to consider that the inversion might be to avoid breaking someone’s copyright, and it makes one wonder if that isn’t hard coded in somewhere. ↩︎

Paying To Whitewash The Fence of AI.

I suppose a lot of people may not have read Tom Sawyer, since it has been banned here and there. Yet there is a part of the book that seems really appropriate today, and it’s unfortunate that people haven’t read it. It’s a great con.

It’s in Chapter 2 that Tom Sawyer gets punished and has to whitewash a fence for his Aunt Polly, and when mocked about his punishment by another boy, he claims whitewashing the fence is fun. It’s so fun, in fact, that the other kid gives Tom an apple (the initial offer was the apple core, I believe), and so Tom pulls this con on other kids and gets their treasures while they paint the fence. He gets ‘rich’ and has fun at their expense while they do his penance.

That’s what’s happening with social media like Facebook, LinkedIn, Twitter, etc.

Videos, text, everything being generated on these social networks is being used to train generative AI that you can use for free – at least for now – while others pay and subscribe to get the better trained versions.

It’s a pretty good con – a classic, really – that I suppose people didn’t read about.

Some people will complain when the AIs start taking over whitewashing the fences, or start whitewashing their children.

Meanwhile, these same companies are selling metaphorical paint and brushes.

I suppose this is why reading is important.

Oddly, the premise of the ban of “The Adventures of Tom Sawyer” was that librarians found Mr. Sawyer to be a “questionable” protagonist in terms of his moral character.

Happy Painting.

Critical Thinking In The Age Of AI.

Critical thinking is the ability to suspend judgement, and to consider evidence, observations and perspectives in order to form a judgement, requiring rational, skeptical and unbiased analysis and evaluation.

It can be difficult, particularly being unbiased, rational and skeptical in a world that seems to require responses from us increasingly quickly.

Joe Árvai, a psychologist who has done research on decision making, recently wrote an article about critical thinking and artificial intelligence.

“…my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

It’s a good article, well worth the read, and it’s in the vein of what I have been writing recently about ant mills and social media. Web 2.0 was built on commerce, which was built on marketing. Good marketing is about persuasion (a product or service is good for the consumer); bad marketing is about manipulation (a product or service is not good for the consumer). It’s hard to tell the difference between the two.

Inputs and Outputs.

We don’t know exactly how much of Web 2.0 was shoveled into the engines of generative AI learning models, but we do know that chatbots and generative AI are now considered more persuasive than humans. In fact, ChatGPT 4 is presently considered 82% more persuasive than humans, as I mentioned in my first AI roundup.

This should be at least a little disturbing, particularly when there are already sites telling people how to get GPT-4 to create more persuasive content, such as this one. The key difference between persuasion and manipulation is whether it’s good for the consumer of the information or not – a key problem with fake news.

Worse, we have all seen products and services that had brilliant marketing but were not good products or services. If you have a bunch of stuff sitting and collecting dust, you fell victim to marketing, and arguably, manipulation rather than persuasion.

It’s not difficult to see that the marketing of AI itself could be persuasive or manipulative. If you had a tool that could persuade people they need the tool, wouldn’t you use it? Of course you would. Do they need it? Ultimately, that’s up to the consumers, but if they in turn are generating AI content that feeds the learning models – what is known as synthetic data – it creates its own problems.

Critical Thought

Before generative AI became mainstream, we saw issues with people sharing fake news stories because they had catchy headlines and fed a confirmation bias. A bit of critical thought applied could have avoided much of that, but it still remained a problem. Web 2.0 to present has always been about getting eyes on content quickly so advertising impressions increased, and some people were more ethical about that than others.

Most people don’t really understand their own biases, but social media companies implicitly do – we tell them with our every click, our every scroll.

This is compounded by the scientific evidence that attention spans are shrinking. Based on research, the new average attention span is 47 seconds. That’s not a lot of time to do critical thinking before liking or sharing something.

While there’s no real measure of how much more or less critical thought is happening, the diminished average attention span is a solid indicator that, on average, people are using less of it.

“…Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of their communities more than they might value their own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser…”

“The hidden risk of letting AI decide – losing the skills to choose for ourselves”, Joe Árvai, The Conversation, April 12, 2024

In an age of generative artificial intelligence that is here to stay, it’s paramount that we understand ourselves better as individuals and collectively so that we can make thoughtful decisions.

That 3rd AI: The Scapegoat.

This is an update of “A Tale of Two AIs” because, as it turns out, there’s a third: the scapegoat.

This weekend, I was pointed at “‘The Machine Does It Coldly’: Artificial Intelligence Can Already Kill People” on Mastodon, which inspired a conversation and part of this post.

The trouble with headlines like that is that the amorphous and opaque blob of ‘Artificial Intelligence’ is blamed for killing people, removing the responsibility from those that (1) created and trained the artificial intelligence and (2) used it.

The use of artificial intelligence in war is nothing new. Ukraine saw the first announced use of artificial intelligence in a war, from dealing with misinformation to controlling drones.

So, referring back to ‘A Tale of Two AIs’, we have the artificial intelligence that is marketed, the artificial intelligence that exists, and now the artificial intelligence as a scapegoat.

It’s not just war, either. Janet Vertesi’s article highlights this sort of scapegoating by Amazon, Meta (Facebook included) and Microsoft.

The story of AI distracts us from these familiar unpleasant scenes…

“Don’t Be Fooled: Much “AI” is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.Press, Apr 4, 2024

Artificial intelligences don’t suddenly become conscious and bomb people. We humans do that: we tell them what they’re supposed to do and they do it – the day of a general artificial intelligence is not yet here.

We have to be careful to assert our own accountability in the age of artificial intelligence. Our humanity depends on it.

From Inputs to The Big Picture: An AI Roundup

This started off as a baseline post regarding generative artificial intelligence and its aspects, and grew fairly long because information kept coming out even as I was writing it. It’s my intention to do a ’roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.

It’s long enough that it could have been several posts, but I wanted it all together at least once.

No AI was used in the writing, though some images have been generated by AI.

The two versions of artificial intelligence on the table right now – the marketed and the real – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.

The marketing aspect is a constant distraction, feeding us what helps with stock prices and goodwill toward those implementing the generative AIs, while the real aspect of these generative AIs is not being addressed in a cohesive way.

To simplify, this post breaks it down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.

The Input.

There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.

The Training Data.

The focus so far has been on what goes into their training data, and that has been an issue, including lawsuits and, less obviously, trust in the companies involved.

…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…

“How Tech Giants Cut Corners to Harvest Data for A.I.”, Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, New York Times, April 6, 2024 1

Of note, too, is that Google has been indexing AI-generated books – what is called ‘synthetic data’ – which has been warned against, but which companies are planning for or even doing already, consciously and unconsciously.

Where some of these actions are of questionable legality, to some their ethics are far less questionable – thus the revolt mentioned last year against AI companies using content without permission. It has been of questionable effect, because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.

There’s a need for that audit, if only to allow for trust.

…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.

“Auditing AI: The emerging battlefield of transparency and assessment”, Mark Dangelo, Thomson Reuters, 25 Oct 2023.

While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least in a public way, though the United States is pushing for global regulations on artificial intelligence at the UN. The status of that doesn’t seem to have been updated, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).

There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of learning data models and their sources, as well as the algorithms involved – and, just to get a little ahead, the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time that has been happening either.

Some websites are trying to block AI crawlers, and it is an ongoing process. Blocking them requires knowing who they are, and it doesn’t guarantee that bad actors won’t stop by anyway.
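
For what that blocking looks like in practice, here’s a minimal sketch that writes a robots.txt asking AI crawlers to stay out. The user-agent names shown (GPTBot, CCBot, Google-Extended) are ones the crawler operators have publicly documented, but lists like this go stale and compliance is entirely voluntary – a crawler that wants your content can simply ignore the file:

```python
# Minimal sketch: generate a robots.txt that asks known AI crawlers
# not to index the site. Verify current user-agent names against each
# operator's documentation; compliance is voluntary.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

stanzas = [f"User-agent: {crawler}\nDisallow: /\n" for crawler in AI_CRAWLERS]

with open("robots.txt", "w") as f:
    f.write("\n".join(stanzas))

print("\n".join(stanzas))
```

The same idea applies at the web-server or firewall level, but the limitation is identical: you can only name the crawlers you know about.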

There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:

“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”

“New bill would force AI companies to reveal use of copyrighted art”, Nick Robins-Early, TheGuardian.com, April 9th, 2024.

Given how much information is used by these companies already from Web 2.0 forward, through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking, it’s pretty obvious that this would be in the training data as well.

The Algorithms.

The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person can answer some questions, even as a layperson, with a much smaller carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.

There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.

The Hardware.

Generative AI has brought about an AI chip race between Microsoft, Meta, Google, and Nvidia, which definitely leaves smaller companies that can’t afford to compete in that arena at a disadvantage so great that it could be seen as impossible, at least at present.

The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead with NVIDIA presently dominating the market for hardware for these AI companies.

The Output.

One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.

There is criticism that the algorithms are making the spread of false information faster, and the US Department of Justice is stepping up efforts to go after the misuse of generative AI. This is dangerous ground, since algorithms are being sent out to hunt the products of other algorithms, and the crossfire between them doesn’t care too much about civilians.2

Then there’s the impact on education: as students use generative AI, education itself has been disrupted. It is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken in more content than students could possibly get or afford in the educational system, which is something worth exploring.

Given that ChatGPT is presently considered 82% more persuasive than humans – likely because it has been trained on persuasive works (see Input: The Training Data) – and since most content on the internet is marketing products, services or ideas, that was predictable. While it’s hard to say how much of the content being put into training data feeds our confirmation biases, it’s fair to say that at least some of it does. Then there are the other biases that the training data inherits through omission or selective writing of history.

There are a lot of problems, clearly, and much of this can be traced back to the training data, which even on a good day is as imperfect as we are; it can magnify, distort, or even be consciously influenced by good or bad actors.

And that’s what leads us to the Big Picture.

The Big Picture

…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…

“Inside the shadowy global battle to tame the world’s most dangerous technology”, Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon, Politico.com, March 26th, 2024

What most people don’t realize is that the ‘game’ includes social media and the information it provides for training models, such as what is happening with TikTok in the United States now. There is a deeper battle, and just perusing content on social networks gives data to those building training models. Even WordPress.com, where this site is presently hosted, is selling data, though there is a way to unvolunteer one’s self.

Even the Fediverse is open to data being pulled for training models.

All of this, combined with the persuasiveness of generative AI that has given psychology pause, has democracies concerned about the influence. A recent example: Grok, Twitter X’s AI for paid subscribers, fell victim to what was clearly satire and caused a panic – which should also have us wondering about how we view intelligence.

…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…

“Elon Musk’s Grok Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X”, Matt Novak, Gizmodo, 8 April 2024

Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of fake news with AI, deepening concerns related to social media and AI, and it should have us asking why billionaires involved in artificial intelligence wield the influence that they do. How much of that influence is generated? We have an idea how much of it is lobbied for.

Meanwhile, Facebook has been spamming users and has been restricting accounts without demonstrating cause. If there were a videotape in a Blockbuster on this, it would be titled “Algorithms Gone Wild!”.

Journalism is also impacted by AI, though real journalists tend to be rigorous about their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, it will go out of its way to make sure that it is: it has a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information, because even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence did not participate much, if at all, at a government level.

Then we have the thing that concerns the most people: their lives. Jon Stewart even did a Daily Show segment on it, which is worth watching, because people are worried about generative AI taking their jobs, with good reason. Even as the Davids of AI3 square off for your market share, layoffs have been happening in tech as companies reposition for AI.

Meanwhile, AI is also apparently being used as a cover for some outsourcing:

Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…

“Don’t Be Fooled: Much “AI” is Just Outsourcing, Redux”, Janet Vertesi, TechPolicy.Press, Apr 4, 2024

Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.

And the waters of The Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, they have now even better tools to influence markets, populations, and you.

In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.

  1. The full article at the New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
  2. That’s not just a metaphor, as the Israeli use of Lavender (AI) has been outed recently. ↩︎
  3. Not the Goliaths. David was the one with newer technology: The sling. ↩︎

Past, Present, and Future: Some thoughts On Intelligence.

One of the underlying concepts of Artificial Intelligence, as the name suggests, is intelligence. A definition of intelligence that fits this bit of writing is from a Johns Hopkins Q&A:

“…Intelligence can be defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor, and has evolved in lifeforms to adapt to diverse environments for their survival and reproduction. For animals, problem-solving and decision-making are functions of their nervous systems, including the brain, so intelligence is closely related to the nervous system…”

“Q&A – What Is Intelligence?”, Daeyeol Lee PhD, as quoted by Annika Weder, 5 October 2020.

This definition fits well because, in all the writing about different kinds of intelligences and human intelligence itself, the words of Arthur C. Clarke echo.

I’m not saying that what he wrote is right as much as that it should make us think. He was good at making people think. The definition of intelligence above actually stands Clarke’s quote on its head because it ties intelligence to survival. In fact, if we are going to really discuss intelligence, the only sort of intelligence that matters is related to survival. It’s not about the individual as much as the species.

We only talk about intelligence in other ways because of our society and the education system, and it’s largely self-referential in those regards. Someone who can solve complex physics equations might be in a tribe in the Amazon right now, but if they can’t hunt or add value to their tribe, all of that intelligence – as high as some might think it is – means nothing. Their tribe might think of that person as the tribal idiot.

It’s about adapting and surviving. This is important because of a paper I read last week that gave me pause about the value-laden history of intelligence, a history that causes the discussion of intelligence to fold in on itself:

“This paper argues that the concept of intelligence is highly value-laden in ways that impact on the field of AI and debates about its risks and opportunities. This value-ladenness stems from the historical use of the concept of intelligence in the legitimation of dominance hierarchies. The paper first provides a brief overview of the history of this usage, looking at the role of intelligence in patriarchy, the logic of colonialism and scientific racism. It then highlights five ways in which this ideological legacy might be interacting with debates about AI and its risks and opportunities: 1) how some aspects of the AI debate perpetuate the fetishization of intelligence; 2) how the fetishization of intelligence impacts on diversity in the technology industry; 3) how certain hopes for AI perpetuate notions of technology and the mastery of nature; 4) how the association of intelligence with the professional class misdirects concerns about AI; and 5) how the equation of intelligence and dominance fosters fears of superintelligence. This paper therefore takes a first step in bringing together the literature on intelligence testing, eugenics and colonialism from a range of disciplines with that on the ethics and societal impact of AI.”

“The Problem with Intelligence: Its Value-Laden History and the Future of AI” (Abstract), Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge, 07 February 2020.

It’s a thought-provoking read, and one with some basis, citing examples from what should be considered the dark ages of society that still persist within modern civilization in various ways. One image encapsulates much of the paper:

Source: https://dl.acm.org/doi/10.1145/3375627.3375813

The history of how intelligence has been used, and has even become an ideology, has deep roots that go back, in the West, as far as Plato. It’s little wonder that there is apparent rebellion against intelligence in modern society.

I’ll encourage people to read the paper itself – it has been cited numerous times. It led me to questions about how this will impact learning models, since much of what is out there inherits the value-laden history demonstrated in the paper.

When we talk about intelligence of any sort, what exactly are we talking about? And when we discuss artificial intelligence, what man-made parts should we take with a grain of salt?

If the thought doesn’t bother you, maybe it should, because the only real intelligence that seems to matter is related to survival – and using intelligence ideologically is about the survival of those that prosper in the systems impacted by the ideology of intelligence – which includes billionaires, these days.

The Battle For Your Habits.

Found floating around today in the wild. As an atheist that doesn’t use Chrome, I know he ain’t talking to me.

There are some funny memes going around about TikTok and… Chinese spyware, or what have you. The New York Times had a few articles on TikTok last week that were interesting and yet… missed a key point that the memes do not.

Being afraid of Chinese Spyware while so many companies have been spying on their customers seems more of a bias than anything.

Certainly, India got rid of TikTok and has done better for it. Personally, I don’t like giving my information to anyone if I can help it, but these days it can’t be helped. Why is TikTok an issue in the United States?

It’s not too hard to speculate that it’s about lobbying by American tech companies that lost the magic sauce for this generation. It’s also not hard to consider India’s reasoning about China being able to push its own agenda, particularly with violence on their shared border.

Yet lobbying from the American tech companies is most likely, because they want your data and don’t want you to give it to China. They want to be able to sell you stuff based on what you’ve viewed, liked, posted, etc. So really, it’s not even about us.

It’s about the data that we give away daily when browsing social networks of any sort, websites, or even when you think you’re being anonymous using Google Chrome when in fact you’re still being tracked. The people advocating banning TikTok aren’t holding anyone else’s feet to the fire, instead leaning on the ‘they will do stuff with your information’ argument, when in fact we’ve had a lot of bad stuff happen with our information over the years.

Found circulating as a meme, which led me to check out StoneToss.com – some really great work there.

Since 9/11 in particular, the US government has taken a pretty big interest in electronic trails, all in the interest of national security, with the FBI showing up after the Boston Marathon bombing just because people were looking at pressure cookers.

All of this information will possibly get poured into learning models for artificial intelligences, too. Even WordPress.com volunteered people’s blogs rather than asking for volunteers.

What value do you get for that? They say you get better advertising, which is something that I boggle at. Have you ever heard anyone wish that they could see better advertising rather than less advertising?

They say you get the stuff you didn’t even know you wanted, and to a degree, that might be true, but the ability to just go browse things has become a lost art. Just about everything you see on the flat screen you’re looking at is because of an algorithm deciding for you what you should see. Thank you for visiting, I didn’t do that!

Even that system gets gamed. This past week I got an ‘account restriction’ from Facebook for reasons that were not explained, other than instructions to go read the community standards, because algorithms are deciding based on behaviors that Facebook can’t seem to explain. Things really took off with that during Covid, where even people I knew were spreading wrong information because they didn’t know better and, sometimes, willfully didn’t want to know better or understand their own posts in a broader context.

Am I worried about TikTok? Nope. I don’t use it. If you do use TikTok, you should. But you should worry if you use any social network. It’s not as much about who is selling and reselling information about you as much as what they can do with it to control what you see.

Of course, most people on those platforms don’t see them for what they are, instead taking things at face value and not understanding the implications for the choices they will have in the future, which could range from advertising to the content one views.

China’s not our only problem.