Images and AI by an Unartist.

I can’t draw to save my life, and I’ve been fortunate it’s never come to that. However, with AI, I can create images that I could never afford to commission from an artist, as I did for the RealityFragments ‘about’ page recently by doing what I am good at – hacking text.

The thing is, I know great artists. I’ll not name drop, but I’ll say you see one couple’s work a lot in Scientific American and other magazines, just to give you an idea of the caliber of people I know in the space. When it comes to that stuff, I would gladly pay them to do some illustrations for me, but I can’t afford them, and using them would not be sustainable.

Of course, my editing skills have gotten better because of AI. For example, I got creative with some AI images generated with DeepAI and merged them through some painstaking editing, dropping one image onto the other just so. I don’t use Photoshop, and I never have, relying instead on GIMP and Paint.net.

So it’s official. The image at right was not Photoshopped.

When it comes to photos and that sort of stuff, I pay attention to what Mark Lyndersay does because he’s so much better at it than I am. I’ve told him as much in person. So when he wrote about AI showing its creative potential on TechNewsTT, I paid attention.

It’s just that he’s on a superhighway and I’m still looking for an on-ramp. Digital image editing is not my strength, and because of that I’m not willing to pay for tools that might end up more dormant than the George Foreman grill in a cupboard in the kitchen. You might have one too. How many of these things did we buy?

Recently, though, I have been toying with the idea of subscribing to Photoshop and giving it a month. Thus when I saw a headline like, “With AI, Photoshopping images now requires zero skill” on a Saturday morning… I had to go look. It’s a very good article; I recommend it.

Fortunately, it was dumb enough for me to understand and smart enough for me to work with. I think I’ll wait past the beta.

If I’m going to pay to experiment, I don’t want to be the experiment.

The Ongoing Saga of the ‘AI’pocalypse

I ran across a surprisingly well-done article on the AIpocalypse thing, which I have written about before in ‘Artificial Extinction’, and it’s worth perusing.

“…In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.”

“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.

Now, let me be plain here. When they say an AI is hallucinating, that’s not really true. Saying it’s ‘bullshitting’ would be closer to true, but it’s not even really that. It’s a gap in the training data and algorithms made apparent by the prompt you give it. It’s not hallucinating. They’re effectively anthropomorphizing some algorithms strapped to a thesaurus when they say, ‘hallucinating’.
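To make that gap concrete, here is a deliberately tiny sketch (a toy bigram model I made up for illustration, nothing like a production LLM) of why these systems answer fluently even when a prompt lands outside their training data: there is no ‘I don’t know’ branch unless someone builds one.

```python
import random

# A toy bigram "language model": it only knows the word-to-word
# statistics of its tiny training corpus, nothing about truth.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build a table of which words followed which during training.
model = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    model.setdefault(current_word, []).append(next_word)

def generate(prompt, length=6):
    """Sample a continuation one word at a time.

    When the prompt falls into a gap (a word never seen in training),
    the model still has to emit something, so it falls back to guessing
    from the whole corpus. The output reads as confident either way;
    that gap, not a mental state, is what gets labeled 'hallucination'.
    """
    word, output = prompt, [prompt]
    for _ in range(length):
        word = random.choice(model.get(word, corpus))  # gap -> guess
        output.append(word)
    return " ".join(output)

print(generate("the"))      # in-distribution: statistically plausible
print(generate("quantum"))  # out-of-distribution: answers anyway, confidently
```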

They’re trying to make you hallucinate, maybe, if we go by possible intentions.

“…Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

“If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN…”

“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.

We don’t like to talk about the intentions of people involved with these artificial intelligences and their machine learning. We don’t know what models are being used for the deep learning, and to cover that gap of trust, words like ‘hallucinating’ are much sexier and dreamier than, “Well, it kinda blew a gasket there. We’ll see how we can patch that right up, but it can keep running while we do.”

I’m not saying that’s what’s happening, but it’s not a perspective that should be dismissed. There’s a lot at stake, after all, with artificial intelligence standing on the shoulders of humans, who are distantly related to kids who eat Tide Pods.

We ain’t perfick, and thus anything we create inherits that.

I think the last line of that CNN article sums it up nicely.

“…Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?””

“Forget about the AI apocalypse. The real dangers are already here”, CNN, Catherine Thorbecke, June 16th, 2023.

That professor just cut to the quick in a way that made me smile. She just straight out said it.

And.

When we talk about biases, and I’ve written about bias a lot lately, we don’t see all that is biased on our own. In an unrelated article, Barbara Kingsolver, the only two-time winner of the Women’s Prize for Fiction, drew my attention to a bias that I hadn’t considered in the context of deep learning training data.

“…She is also surprisingly angry. “I understand why rural people are so mad they want to blow up the system,” she says. “That contempt of urban culture for half the country. I feel like I’m an ambassador between these worlds, trying to explain that if you want to have a conversation you don’t start it with the words, ‘You idiot.’”…”

“Barbara Kingsolver: ‘Rural people are so angry they want to blow up the system’”, The Guardian, Lisa Allardice quoting Barbara Kingsolver, June 16th, 2023.

She’s not wrong – and the bias is by omission, largely, on both the rural and urban sides (suburbia has a side too). So how does one deal with that in a training model for machine learning?

We’ve only scratched the surface, now haven’t we? Perhaps just scuffed.

Revisiting Interactivity on WordPress.com

While I was interacting on the RealityFragments Facebook page (join in!) about the use case issue of allowing users to log in to interact with content on RealityFragments and here, it occurred to me that WordPress probably does allow for people with Google accounts to log in.

WordPress does have a plugin for allowing Google accounts to log in. It exists. So I went over to my administrative page, mentally slapping myself on the forehead about it, only to find out that even though my account is premium (paid), I would need to upgrade to a WordPress.com business account for what is now $25/month. What?

This should be the default, even for free sites on WordPress.com, because people interacting with content is how people with weblogs grow, and when they grow, they might consider the tiered pay accounts.

Instead, they’re effectively screwing themselves over – and their users – by trying to force people to pay more when their monetization plans are at best… blech, particularly if you live outside the geographic areas Stripe supports.

Just one plugin would cause more interactivity on sites. It should be a default. How annoying is that?

Now I have to go compare options before I renew my sites on WordPress.com, which has been otherwise trouble-free but annoyingly myopic regarding monetization and usability for the users of their users. The readers.

Or maybe they will read this, have a Eureka moment, and change the way that they do things.

For now, please use the RealityFragments Facebook page to interact with content here, to stop in and say hi, and to meet others who are doing the same.

Science Meets Science Fiction: Giving AI Senses.

At present, the only information that these stabs at artificial intelligence get is through our curation. Some subset of humanity decides what goes into the training models and tweaks the algorithms, and then we get them to spit out gobs of text, images, and even video.

Personally, I’ve never seen much value in all of that since we produce human intelligence cheaply, with less of a carbon footprint and, if done right, more pleasurably. The increase in global population demonstrates that developing an intelligence like a human would be a bit redundant.

One of the best ways of explaining this is through Science Fiction – this quotation from Battlestar Galactica, where Brother Cavil (a Cylon, an artificial intelligence) is griping about how he was created.

“In all your travels, have you ever seen a star go supernova? …

I have. I saw a star explode and send out the building blocks of the Universe. Other stars, other planets and eventually other life. A supernova! Creation itself! I was there. I wanted to see it and be part of the moment. And you know how I perceived one of the most glorious events in the universe? With these ridiculous gelatinous orbs in my skull! With eyes designed to perceive only a tiny fraction of the EM spectrum. With ears designed only to hear vibrations in the air. …

I don’t want to be human! I want to see gamma rays! I want to hear X-rays! And I want to – I want to smell dark matter! Do you see the absurdity of what I am? I can’t even express these things properly because I have to – I have to conceptualize complex ideas in this stupid limiting spoken language! But I know I want to reach out with something other than these prehensile paws! And feel the wind of a supernova flowing over me! I’m a machine! And I can know much more! I can experience so much more. But I’m trapped in this absurd body! And why? Because my five creators thought that God wanted it that way!”

‘Brother Cavil’, written by Ronald D. Moore, Battlestar Galactica series

Why would we want to simply replicate the senses we ourselves are stuck with? If you think about it, much of our modern knowledge comes from extending our senses by detecting things and representing them to our actual senses. X-rays are read by radiologists every day. Radar. Sonar.

Fox News has an article, “Artificial intelligence won’t likely reach human-like levels without this one key component, study finds”, and it refers to the sensor output but doesn’t get to the actual source. A bit of digging, and I found that TechXplore gave it up.

“In a paper published in Science Robotics, Professor Tony Prescott and Dr. Stuart Wilson from the University’s Department of Computer Science, say that AI systems are unlikely to resemble real brain processing no matter how large their neural networks or the datasets used to train them might become, if they remain disembodied.”

“AI unlikely to gain human-like cognition, unless connected to real world through robots, says study”, University of Sheffield, TechXplore, June 12, 2023.

The actual paper is “Understanding brain functional architecture through robotics”, and it is unfortunately paywalled by Science.org.

The point is that some smart professors figured out what science fiction already had, though their premise is scientific whereas the science fiction premise I quoted above was empathic. I could probably find at least 10 more examples in a day, and hundreds within a week. Maybe even thousands of science fiction references in a month.

Who wouldn’t want to be able to watch a star go supernova without harm?

But there’s much more to it than that. What if we could have an intelligent partner that could help us deal with stuff that is well beyond our senses, yet can communicate with us?

That’s the stuff of science fiction, and maybe soon, the stuff of science.

But then, where would we fit in?

Revisiting Design: The RealityFragments Like/Comment Use Case

Yesterday, I went on a bit of a spree on RealityFragments.com, with the results fairly summarized on the RealityFragments About Page. The reason for the spree was pretty simple.

There are some issues with design.

Some of it is implicit in WordPress.com. To ‘like’ or ‘comment’ on content, you need a WordPress.com account. It’s painful for non-WordPress.com users to do that when they’re used to logging into everything automagically – and it’s also necessary to avoid spam comments that link to websites selling everything from ‘get rich quick’ schemes to promises of increasing the prominence of one’s nether regions. It’s a hard balance.

And it’s kinda crappy design because we, collectively, haven’t figured out a better way to handle spammers. I could get into the failures of nations to work together on this, but if we go down that path we will be in the weeds for a very, very long time.

Suffice it to say my concern is that of the readers. The users. And it brought to mind that yellow book by Donald A. Norman, the very color of the book being an example of good design. After all, that’s how I remember it.

“Design is really an act of communication, which means having a deep understanding of the person with whom the designer is communicating.”

Donald A. Norman, The Design of Everyday Things (2013)

This is where we who have spent time in the code caves get things wrong. Software Engineers are generally rational beings who expect everyone to be rational, and if we just got rid of irrational users, “we would have a lot less problems!”

I’ve spent about half a century on the planet at this point, and I will make a statement: By default, humans are irrational, and even those of us who consider ourselves rational are irrational in ways we… rationalize. Sooner or later, everyone comes to terms with this or dies very, very frustrated.

The problem I had is that I wasn’t getting feedback. The users can’t give it without giving WordPress.com the emotional equivalent of their firstborn child, apparently. Things have gotten faster and we want things more now-er. We all do. We want that instant gratification.

In the context of leaving a comment, if there are too many bells and whistles associated with doing it, the person forgets what they were going to comment about in the first place.

“The idea that a person is at fault when something goes wrong is deeply entrenched in society. That’s why we blame others and even ourselves… More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised… But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature…. Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.”

Donald A. Norman, The Design of Everyday Things (2013)

The thing is – there is no good solution for this. None whatsoever, mainly because the alternative that was already there had not occurred to the users. It’s posted on Facebook, on the RealityFragments page, where I mix content from here and RealityFragments. The posts can be easily interacted with on Facebook for those who use Facebook. Sure, it doesn’t show on the website, but that doesn’t matter as much to me as the interaction itself.

Factor in that it’s easy for my posts to get buried by Facebook algorithms, and it becomes an issue as well.

Thus, I created the RealityFragments Group on Facebook. People join, and they can wander into the group and discuss stuff asynchronously, instead of the doomscroll of content people are subjected to. My intention is for my content not to compete for attention in that way, because it simply can’t.

I don’t have images of models trying on ideas. I don’t have loads of kitten pictures, and I’m certainly not getting dressed up and doing duck lips to try to convince people to read and interact with what I create. I am also, for the record, not willing to wear a bikini. You’re welcome.

This was a less-than-ideal solution to the problem. Maybe.

Time will tell if I got it right, but many of the more technically minded will say, “You could just manage your own content management system on a rented server.” This is absolutely true.

What’s also true is that I would then be on the hook for everything, and when a content management system needs love, it wants it now. Thus when I’m ready to start writing, I suddenly have to deal with administration issues, and before you know it, I’ve forgotten what I wanted to write – just like the users who have to create an account on WordPress.com to comment or like. A mirror.

So this is a compromised solution. Maybe. Time will tell.

And if you want to interact with this post and can’t log in to WordPress, feel free to join the RealityFragments.com Facebook group. Despite its name, it’s also for KnowProSE.com.

Trinidad and Tobago: Copying of IDs Continues.

When I wrote about the amount of photocopies made of IDs in Trinidad and Tobago back in 2018, I failed to look forward because, as with most things in Trinidad and Tobago, we are so focused on the present, which looks a lot like the past in other nations.

I’d considered writing a really long article about it, but instead, I’ll just do this off the cuff.

Here’s the thing. A lawyer called me this morning and wanted me to email him copies of two forms of ID so that he could help his client out with something, because in dealing with a government office, they would make copies of the ID to have on file because… because Trinidad and Tobago is displaced by about 20 years of bureaucracy. This bureaucracy only persists because of those who benefit from it.

In an age where Mark Lyndersay complains that Lensa thinks he should have hair, in the very same country, maybe sending digital copies of identification around could be considered a National Security issue. Or a banking fraud issue, since banks are one of the sore points for… well, everyone.

That it’s not criminal is only because it hasn’t been outlawed yet. It is, as we would say here, doltish.

It’s amazingly simple to do, particularly given the cursory inspection identification cards get in the places that make copies of them. It wouldn’t be too hard to create an ID from a template, a few photorealistic AI-generated images, and the right dimensions.

The bureaucracy can’t get out of its own way.

Internet Detritus.

Back in 1996, I was driving to work in the Clearwater, Florida area and saw a billboard for Brainbuzz.com, now viewable only through the Wayback Machine. I joined, and I ended up writing for them. Not around anymore.

They became CramSession.com, where I continued writing. I wrote roughly 100 articles for them about software engineering and C++, which are just… gone. Granted, that was over two decades ago, but it’s peculiar to live longer than all these companies that thrived during the Dot Com Bubble, which should be taught in high school now as part of world history. It isn’t, of course, but it should be.

Consciously, we distill good things and keep moving them forward, but sometimes, because of copyright laws, things get orphaned in companies that closed their digital doors. It’s hard to convey this lack of permanence to future generations because the capacity for things to last ‘forevah’ seems built into some social media, yet it’s hidden away by algorithms, which is effectively the same thing.

Sometimes bubbles of information get trapped in the walls of an imploded company. It could happen even to the present 800 lb gorillas of the Internet. The future is the one thing nobody will tell you about in their end-of-year posts: it’s unpredictable. The world changes more and more rapidly, and we forget how much gets left behind at times.

“When the Lilliputians first saw Gulliver’s watch, that “wonderful kind of engine…a globe, half silver and half of some transparent metal,” they identified it immediately as the god he worshiped. After all, “he seldom did anything without consulting it: he called it his oracle, and said it pointed out the time for every action in his life.” To Jonathan Swift in 1726 that was worth a bit of satire. Modernity was under way. We’re all Gullivers now. Or are we Yahoos?”

Faster: The Acceleration of Just About Everything, James Gleick, 2000.

What’s really funny about that quote is that Yahoo.com was more of a player in the search engine space back then. In fact, in 1998, Yahoo was the most popular search engine, and that it’s still around is actually a little impressive given all that happened after the Dot Com Bubble popped. So the quote itself hasn’t aged that well, which demonstrates the point I am making.

Nothing really lasts on the Internet, and even with the Wayback Machine (thank you, Internet Archive!), much of what was is simply no longer, subject to which companies owned the copyrights to the information, or simply to what has been kept around through what boils down to popularity.

And what’s popular isn’t always good. I submit to you any elected official you dislike to demonstrate that popularity is subjective – and on the Internet, popularity is largely about marketing and money spent toward that end. The Internet, as it stands, is the house that we built based on what made money.

That’s not particularly attractive.

In the end, it all sort of falls away. And coming generations will see it as well, some may have already begun seeing it.

Who decides what stays on the Internet? Why, we do of course, one click at a time.

Now imagine this fed into an artificial intelligence’s deep learning model. The machine learning would be taught only what has survived, not what has failed – and this could be seen as progress. I think largely it is, despite myself – but what important stuff do we leave behind?

We don’t know, because it ain’t there.
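To illustrate that survivorship effect, here’s a made-up miniature of a corpus-building step. The URLs and the availability check are placeholders I invented, but any real scraper has the same property: it trains only on what still resolves.

```python
import urllib.request

# Illustrative only: a scraper building a training corpus can, by
# definition, collect just the pages that still exist. A hundred
# articles from a long-dead company never make it in -- not because
# they were bad, but because they're gone. These URLs are placeholders.
candidate_urls = [
    "https://example.com/",                     # still resolves
    "https://example.com/dot-com-era-article",  # long since gone
]

corpus = []
for url in candidate_urls:
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            corpus.append(response.read().decode("utf-8", errors="replace"))
    except OSError:
        # Dead link: whatever was here leaves no trace in the model.
        pass

print(f"Training on {len(corpus)} of {len(candidate_urls)} candidate documents")
```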

Our Own Wall.

One of the more profound biases we have when it comes to our technology is just how stupid we can be. Ignorant, too, because we often forget just how little we know in the grand scheme of things, which is well beyond our sight at any time, no matter how well informed we are.

It’s the Dunning-Kruger effect at levels depending on which mob we talk about and what that particular mob is made up of. Are they more open-minded than closed-minded? Are they open to surprises?

We always end up surprised by something, and that’s a good thing. We don’t get new knowledge without being surprised in some way.

To be surprised means that something has to come leaping out of the dense grass of our biases and attack us or help us in some way. Surprise is important.

Personally, I like being surprised because it means something is new to me.

I’m not writing about a chocolate cake lurking in a dark room; I’m writing about expecting a result and getting something different, though exploring a new chocolate cake is also something I don’t mind. No, what I’m writing about is that unexpected outcome that has you wondering why it was unexpected.

That leads us to find out why, and that’s where we get new knowledge from. Asking the right questions.

It occurs to me that in creating this marketing of ‘artificial intelligence’, we’ve created idiots. I thought we had enough, but apparently we need more. They don’t ask questions. They are better informed than our idiots, mind you, but someone gets to pick the distilled learning model they’re informed by.

I call them idiots not because they give us answers, sometimes wrong, but because they don’t ask questions. They don’t learn. We have a fair number of systems on the planet we created that are in stasis instead of learning, and we’ve added new ones to the list.

I expect the marketers will send out a catalog soon enough of dumb systems marketed as smart technology.

Meanwhile, new generations may forget questioning, and that seems like it’s something we shouldn’t forget.

Bubbles Distilled By Time.

We all perceive the world through our own little bubbles. As far as our senses go, we only have touch, taste, hearing, smell and sight to go by. The rest comes from what we glean through those things, be it other people, technology, language, culture, etc.

If the bubble is too small, we feel it is a prison and do our best to expand it. Once it’s comfortable, we don’t push it outward as much.

These little bubbles contain ideas that have been passed down through the generations, how others have helped us translate our world and all that is in it, etc. We’re part of a greater distillation process, where, because of our own limitations, we can’t possibly carry everything from previous generations.

If we consider all the stuff that creates our bubble as little bubbles themselves that we pass on to the next generation, it’s a distillation of our knowledge and ideas over time. Some fall away, like the idea of the Earth being the center of the Universe. Some stay with us despite not being used as much as we might like – such as the whole concept of ‘be nice to each other’.

If we view traffic as something going through time, bubbles are racing toward the future all at the same time, sometimes aggregating, sometimes not. The traffic of ideas and knowledge is distilled as we move forward in time, one generation at a time. Generally speaking, until broadcast media, this was a very local process. Thus the red dots trying to get us to do things, wielded by those who wish us to do things, from purchasing products to voting for politicians with their financial interests at heart.

Broadcast media made it global, at first by giving people information and then by broadcasting opinions to stay sustainable through advertising. Social media has become the same thing. How will artificial intelligences differ? Will ChatGPT suddenly spew out, “Eat at Joe’s!”? I doubt that.

However, those with fiscal interests can decide what the deep learning of artificial intelligences is exposed to. Machine learning is largely about clever algorithms and pruning the data those algorithms are trained on, and the people doing that pruning are certainly not the most unbiased of humanity. I wouldn’t say they are the most biased either – we’re all biased by our bubbles.
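As a sketch of what that pruning looks like in practice (the corpus, keywords, and filter rule below are all made up for illustration; real pipelines are vastly larger), the principle holds: whoever writes the filter decides what the model ever sees, and the bias is by omission.

```python
# A minimal, hypothetical curation step. Whatever keep() rejects
# simply never reaches training -- bias by omission, invisible in
# the output because the output has nothing to compare against.
raw_corpus = [
    "Rural broadband access remains limited.",
    "Urban transit ridership hits record highs.",
    "Farm co-op pools equipment to cut costs.",
    "New downtown gallery opens to acclaim.",
]

def keep(document: str) -> bool:
    # One curator's idea of "quality": it happens to favor
    # urban-flavored sources. The rule looks neutral; it isn't.
    return any(word in document.lower() for word in ("urban", "downtown"))

training_set = [doc for doc in raw_corpus if keep(doc)]
print(training_set)  # the rural documents are gone, and nothing marks their absence
```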

It’s Pandora’s Box. How do we decide what should go in and what should stay out? Well, we can’t, really. Nobody is actually telling us what’s in these models now. Our education systems, too, show us that this is not necessarily something we’re good at.