Tailoring Social Networks To Microlearning.

On Facebook, there’s been a deluge of advertising about microlearning instead of ‘doomscrolling’. I learned long ago to pick and choose the social media I like, which happens to cover about all the things I find interesting.

In a way, I already have my networks tailored for microlearning. Sure, I have the goofy stuff, but in the right ratio you can fit in microlearning simply by finding trustworthy sources of knowledge.

You don’t have to pay anyone to do this. The downside is that you also have to check your sources a lot, because trustworthy sources on the Internet are not always easy to find.

Pro tip: Look for the ones who correct their mistakes or recognize their limitations, who cite their sources, and who challenge you.

It’s your time. Don’t waste it on their stuff unless you’re interested in their stuff.

Do this for a while and your social networks will let you microlearn about topics you’re interested in without paying anyone. For me, that covers a lot, and I’m always looking for things that challenge my own biases and thoughts.

Pro tip: if you’re just going to regurgitate someone else’s ideas, at least pick some good ideas.

Writing Software Nostalgia.

It wasn’t long ago that I mentioned I had picked up Scrivener, and I’m enjoying it. It’s very much becoming a valuable tool for me as I plod away. I make my writing mistakes much more quickly now, and I can correct them much more quickly too, mainly because of the research capabilities of the software.

I’m not easily impressed. It’s pretty cool, though not Stephen-King-balancing-a-typewriter-on-his-knees-in-a-trailer-laundry-area cool. It’s also not as cool as the typewriter in the picture to the left, though the trouble with that typewriter, as romantic as it is when it comes to writing, is the incessant clickety-clacking and dinging. It’s been so long since I’ve written on a typewriter that I’m unsure why anyone ever thought it was a good idea.

Clickety clickety clack clickety clickety clack ding.

That being said, I remember the first word processors on computers, and they were pretty awesome, particularly because you didn’t have to use correction fluid (‘Whiteout’, as I called it) and didn’t have to worry about lining up pages just so. Generations after GenX will never truly appreciate what a quantum leap it was to be able to write an entire document and save it before printing it, allowing you all manner of editing ability after writing. Of course, when we went to write something, we had to have the idea, we had to be organized, and we had to have some physical endurance. You got strong fingers on a typewriter.

That’s why, when I saw Robert J. Sawyer’s article on WordStar, I chuckled a bit. WordStar was a part of that shift, and while Microsoft and everyone else seem to be packing more and more features into their menus, WordStar got it right decades ago. Four decades ago.

40 years!

For most writers, that’s all they really needed, as Sawyer, a Hugo and Nebula winner, documents well. Do you like George R. R. Martin’s books? You know, that Game of Thrones guy? Yup, he still uses WordStar.

What WordStar lacked – the only thing it lacked – was the document organization that I needed, because my mind is sometimes not as organized as I would like it to be. I envy people who are consistently organized that way.

Yet here’s a fun thing to consider: if WordStar got it right 40 years ago, why have people been buying software that didn’t fit what they needed? Office suites, where you could work with spreadsheets and such. The ‘one office suite to rule them all and in the darkness bind them’ marketing campaign, where it was like Oprah was handing out features. You get a feature! You get a feature! All of you get features!

The future of writing software, though, is a curious thing to consider given all these large language models out there. Everyone’s trying a new gimmick, it seems, and out of the terabytes of garbage generated we might get something good – but it probably won’t last as long as WordStar.

A Tale of Two AIs.

2023 has been the year when artificial intelligence went from science fiction to technological possibility. It’s become so ubiquitous that on Christmas Eve, chatting with acquaintances and friends, people from all walks of life were talking about it.

I found it disappointing, honestly, because it was pretty clear I was talking about one sort of artificial intelligence while others were talking about another sort entirely.

One, a lawyer, mentioned that she’d had lunch with an artificial intelligence expert. After listening and asking a few questions, it became clear she was describing what sounded like a power user of ChatGPT. When I started talking about some of the things I write about here related to artificial intelligence, she said that they had not discussed all of that. Apparently I went a bit too far, because she then asked, “But do you use the latest version of ChatGPT that you have to pay for, like this expert does?”

Well, yes, I do. I don’t use it to write articles, and if I do use ChatGPT to write something, I quote it. I have my own illusions; I don’t need to take credit for any hallucinations ChatGPT has. I also don’t want to incorporate strategic deception into my writing. To me, it’s a novelty and something I often find flaws with. I’m not going to beat up ChatGPT – it has its uses, and the fact that I can use DALL-E to generate some images, like the one above, is helpful.

What disturbed me was that she thought that was what an artificial intelligence expert does. That seems a pretty low bar; I wouldn’t claim to be an artificial intelligence expert because I spend $20/month. I’m exploring it like many others and stepping back to look at problematic consequences, of which there are many. If we don’t acknowledge and deal with those, the rest doesn’t seem to matter as much.

That’s the trouble. Artificial intelligence, when discussed or written about, falls into two main categories that co-exist.

Marketed AI.

The most prominent one is the marketing hype right now, where we get ‘experts’ who, for whatever reason, are claiming a title for being power users of early stabs at artificial intelligence. This is what I believe Cory Doctorow wrote about with respect to the ‘AI bubble’. It’s more about perception than reality, in my mind, and in some ways it can be good because it gets people to spend money so that, hopefully, those who collect it can do something more about the second category.

Yet it wasn’t long ago that people were selling snake oil. Over the last few decades, I’ve seen ‘website experts’ become ‘social media experts’, and now suddenly we have ‘artificial intelligence experts’.

Actual Artificial Intelligence.

The second category is actual artificial intelligence itself, which I believe we may be getting closer to. It’s where expert systems, which have been around since the 1970s, have made some quantum leaps. When I look at ChatGPT, as an example, I see an inference engine (the code) and a knowledge base processed from a learning model. That’s oversimplified, I know, and one can get into semantic arguments, but conceptually it’s pretty close to reality.
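To make that split concrete, here’s a toy sketch in C++ of the classic expert-system shape I’m describing – a knowledge base of rules kept as data, and a small inference engine that forward-chains over it. This is illustrative only: ChatGPT’s internals are nothing this simple, and every rule, fact, and name in it is made up for the example.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Knowledge base: rules kept as data, separate from the engine.
    // A real system would load these from somewhere; they are hard-coded here.
    struct Rule {
        std::vector<std::string> conditions; // facts that must all be present
        std::string conclusion;              // fact added when they are
    };

    // Inference engine: forward-chains over the rules until no new fact fires.
    std::vector<std::string> infer(std::vector<std::string> facts,
                                   const std::vector<Rule>& rules) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (const auto& rule : rules) {
                bool allMet = std::all_of(
                    rule.conditions.begin(), rule.conditions.end(),
                    [&](const std::string& c) {
                        return std::find(facts.begin(), facts.end(), c) != facts.end();
                    });
                bool known = std::find(facts.begin(), facts.end(),
                                       rule.conclusion) != facts.end();
                if (allMet && !known) {
                    facts.push_back(rule.conclusion); // derived a new fact
                    changed = true;
                }
            }
        }
        return facts;
    }

    int main() {
        // Made-up rules for illustration only - not medical advice, not ChatGPT.
        std::vector<Rule> rules = {
            {{"cough", "fever"}, "possible flu"},
            {{"possible flu"}, "suggest seeing a doctor"},
        };
        for (const auto& fact : infer({"cough", "fever"}, rules))
            std::cout << fact << '\n';
    }

The engine never changes; swap in a different set of rules and you get a different ‘expert’. That’s the sense in which the code and the knowledge are separable.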

If you take a large language model like ChatGPT and feed it only medical information, it can diagnose based on the symptoms a patient has. Feed it only information on a programming language like COBOL, and it can probably write COBOL code pretty well. ChatGPT has a learning model that we don’t really know, and it is apparently pretty diverse, which allows us to do a lot of pretty interesting things besides generating silly images for blog posts. I’ve seen some code in JavaScript done this way, and I just generated some C++ code as a quick test with ChatGPT 4 that, yes, works, and it does something better than most programmers do: it documents how it works.

I’d written about software engineers needing to evolve too with respect to artificial intelligence.

It has potential to revolutionize everything, all walks of life, and it’s going to be really messy because it will change jobs and even replace them. It will be something that will have psychological and sociological consequences, impacting governments and the ways we do… everything.

The Mix of Marketed vs. Actual.

The argument could be made that without marketing, businesses would not make enough money for the continued expense of pushing the boundaries of artificial intelligence. Personally, I think this is true. The trouble is that marketing takes over what people believe artificial intelligence is. This goes with what Doctorow wrote about the bubble as well as what Joe McKendrick wrote about artificial intelligence fading into the background. When the phrase is over-used and misused in businesses, which seems to already be happening, the novelty wears off and the bubble pops in business.

That’s kind of what happened with social media and ‘social media experts’.

The marketing aspect also causes people to worry about their own jobs – jobs which maybe they don’t even want, but they need the income because there are bills to pay in modern society. The fear of some is tangible, and with good reason. All the large language models use a very broad brush in answering those fears, as do the CEOs of the companies: we’ll just retrain everyone. There are people getting closer to retirement, and what companies have been doing to save money and improve their stock performance is finding reasons to ‘let people go’, so that comfort is spoken from on high with the same sensitivity as “Let them eat cake”. It’s dismissive and ignores the reality people live in.

Finding the right balance is hard when there’s no control of the environment. People are talking about what bubbles leave behind, but they don’t talk as much about who they leave behind. Harvard Business Review predicted that the companies that get rid of jobs with artificial intelligence will eventually get left behind, with some unpredictable economic consequences along the way.

‘Eventually’ can be a long time.

The balance must be struck by the technology leaders in artificial intelligence, and that seems about as unlikely as it was with the dot-com boom. Maybe ChatGPT 4 can help them out, if they haven’t been feeding it enough of their own claims.

And no, you aren’t an ‘artificial intelligence expert’ just because you’re a paying user of some artificial intelligence platform, just like buying a subscription to a medical journal doesn’t make you a medical professional.

The Walls Have Ears.

Years ago, I had the then-new Amazon Echo, I had multiple Kindles, and I had a cough. A bad cough. A cough so bad that I ended up going to a hospital over it and got some scary news, which is a story by itself.

What was weird was that the Kindles started showing ads for cough drops and cough syrups. Just out of the blue. I hadn’t shopped for those on Amazon, and I think it unlikely that they were getting updates from my pharmacy on my over-the-counter habits.

This was creepy.

I donated the Echo to someone else, and the Kindles started having advertisements for books that were semi-interesting again. No more over the counter stuff for coughs. This is purely anecdotal, but as someone who does value his privacy, I opted to simply not have it around. My life was complete without an Echo and I began questioning why I had gotten it in the first place.

Since then, I’ve just quietly nodded my head when people say that they think devices are listening to them. If poked with a stick, I tell the story. Mobile phones, with all the apps that use voice, are a big hole.

Let’s be honest about ourselves: We are, collectively, pretty bad at settings and making sure we don’t leak information we don’t want to. It’s not completely our fault either. Staying on top of software settings when the software is in a constant state of being updated is not an easy task.

It turns out that people who have been concerned about it, as I am, may have a reason, though it’s being denied:

...In a Nov. 28 blog post (which also has been deleted), CMG Local Solutions said its “Active Listening” technology can pick up conversations to provide local advertisers a weekly list of consumers who are in the market for a given product or service. Example it cited of what Active Listening can detect included “Do we need a bigger vehicle?”; “I feel like my lawyer is screwing me”; and “It’s time for us to get serious about buying a house.”

There’s a big question as to why someone would even make that claim in the first place without it being true. Maybe it was a drunk intern. Maybe it was an upset employee leaving with a ‘fish in the ceiling’1.

I could put on a tinfoil hat and say that the NSA probably has backdoors in every operating system made in the United States. It’s credible after 9/11, but when I write ‘after 9/11’ I realize there’s an entire generation that doesn’t remember how things were before. Back then, we were less concerned about who was listening in on us because the need to listen to everyone was much less. The word ‘terrorism’ had many different definitions in government, and almost none of them seemed to agree with each other. It was a troublesome time for technology.

We have generations blindly trusting these technologies at this point because they’ve been raised on them, much as I was raised on Sesame Street. Sesame Street, though, was not too interested in my shopping habits or in priming me to buy a certain line of hardware, software, or subscription services. When you think about it, GenX was being sold on the idea of learning stuff, whereas subsequent generations have been increasingly marketed to under the guise of education.

All of this should be something that is at least on our radars, something we understand as a possibility.

If the government is doing it, we can’t really depend on them to get companies not to – and we don’t know who is doing it at all.

It takes one story – a cough around an Echo – to make it feel real, if you’re paying attention.

  1. At one company I worked for, someone who had quit had left a few fish in the ceiling tiles in a cube farm. It took months for people to find out where the smell was coming from. ↩︎

Beyond The Bubble.

Cory Doctorow has said that AI is a bubble, which in some ways makes sense. After all, what is being marketed as artificial intelligence is pretty much a matter of statistics trained to give users what they want based on what they have wanted previously, collectively.

That, at least to me, isn’t really artificial intelligence as much as it’s math as smoke and mirrors giving the illusion of intelligence. That’s an opinion, of course, but when something you expect to give you what you want always gives you what you want, I’m not sure there is intelligence involved. It sounds more like subservience.
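To show what I mean by math giving the illusion of intelligence, here’s a deliberately crude sketch – all items and click counts invented for the example – of a system that ‘recommends’ by replaying collective history with weighted dice:

    #include <iostream>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    // A deliberately crude 'recommender': it replays collective history.
    // Items people clicked on before are more likely to be shown again.
    // No understanding involved - just counting and weighted dice.
    int main() {
        // Made-up click counts, standing in for what people wanted previously.
        std::map<std::string, int> pastClicks = {
            {"cat video", 90}, {"outrage headline", 70}, {"long essay", 5}};

        std::vector<std::string> items;
        std::vector<int> weights;
        for (const auto& [item, clicks] : pastClicks) {
            items.push_back(item);
            weights.push_back(clicks);
        }

        std::mt19937 rng{std::random_device{}()};
        std::discrete_distribution<size_t> pick(weights.begin(), weights.end());

        // 'Recommend' five items: what you get is what the crowd wanted before.
        for (int i = 0; i < 5; ++i)
            std::cout << items[pick(rng)] << '\n';
    }

Feed it different counts and it ‘wants’ different things. It can only ever hand back a remix of what it was given, which is why subservience seems the better word.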

In fact, as a global society, we should probably be asking more of what we expect from artificial intelligences rather than having a handful of people dictate what comes next. Unfortunately, that’s not the way things seem to work with our global society.

The reality, as Joe McKendrick pointed out, is that AI as marketed now will simply fade into the background, becoming invisible – which it already has. New problems arise from that, particularly around accountability.

I expanded on that in “The Invisible Future“.

Cory Doctorow is pretty much on the money despite being mocked in some places. It’s a marketing bubble about what has been marketed as artificial intelligence. What we have are useful tools at this point that can make some jobs obsolete, which says more about the jobs than anything else. If, for example, you think that a large language model can replace a human’s ability to communicate with other humans, you could be right to an extent – but virtual is not actual.

What will be left next year of all that has been marketed? The stuff behind the scenes, fading into the background – the stuff that is almost never profitable by itself.

Yet where Cory Doctorow is a bit wrong is that imaginations have now been harnessed toward artificial intelligence, and maybe we will actually produce an intelligence that is an actual intelligence. Maybe, like little spores, they will help us expand our knowledge beyond ourselves if we fit them with sensors so that they can experience the world themselves rather than being given regurgitated human knowledge.

After all, we create humans much more cheaply than we do artificial intelligences.

I think that might be a better thing to achieve, but… that’s just an opinion.

The Invisible Future.

Joe McKendrick, senior contributor at Forbes.com, predicts that artificial intelligence will fade into the background.

It sort of already has, as even he points out in his article.

That, you see, is the trouble. We don’t know the training models for these artificial intelligences, we don’t know what biases are inherent in them, and we’re at the mercy of whoever is responsible for them. We’re hoping that they’re thoughtful and considerate and not more concerned with money than people.

That really hasn’t worked out so well for us in the past. Yet the present is here in all its glory, unrepentant. It’s happening more obviously now with the news, since next year we get artificial news anchors. It’s being used to fight misinformation on social media platforms like Facebook without even explaining to users why posts are removed, or what they contained that was worth removing them for. It’s here and has been here for a while.

Weirder still is the fact that even Facebook’s algorithms aren’t catching deepfake videos, with consequences in Bangladesh.

Pandora’s box has been opened, and the world will never quite be the same again. Archimedes once talked about having a lever long enough.

Nowadays it’s just a matter of a choice of fulcrum.

Democracy, based on the idea that informed people can make informed choices in their own interest and the common good, could easily become misDemocracy, where the misinformed make misinformed choices that they think are in their own interests and what they think is the common good.

Perfect Space for Reading and Writing?

Daily writing prompt
You get to build your perfect space for reading and writing. What’s it like?

I’ve tried evolving things over the years, and what I have found is that it’s not where I write that matters. It’s how I feel that matters.

Sometimes it means sitting at the big white dining table in the living room, as I am now, even ignoring the mess off to the right since I’m mid-reorganization.

Sometimes I do it outside on my balcony, with the raw cedar – freshly polished today.

The only place I don’t write is in the bedroom, really. Well, the bathrooms too.

I used to have romantic ideas of writing on the beach. That’s a bad idea. Sand is corrosive stuff that gets all over everything. I will write in notebooks, but then the sun is never quite right, the wind is never quite right, the sand gets everywhere… and on every beach I’ve been to in every country, invariably there’s some idiot with a big speaker system in his car who really wants to play me the song of his people.

The thing I need for writing is an idea that has congealed. Once I have that, writing is a simple task.

Today I did not have one, so I finally used one of the writing prompts.

The Best Way To Avoid Spreading Misinformation.

It’s likely that at some point we’ve all spread some misinformation involuntarily. It can have dire consequences, too. The Washington Post has an article on misinformation, but they forgot the most important thing, I think.

Waiting.

Finding ‘trusted sources’ has been a problem I’ve been studying since we were working on the Alert Retrieval Cache. In an actual emergency, knowing which information you can trust from the ground and elsewhere is paramount. I remember Andy Carvin asking me how Twitter could be used for the same, and I shook my head, explaining the problem that no one seemed to want to listen to: an open network allows flawed information to be accepted as truth.

Credentialism is a part of the problem. We expect experts to be all-knowing when, in fact, being an expert itself has no certification. It requires having been right before, all the while we want right now, and unfortunately the truth doesn’t work that way.

We see a story on social media and we share it, sometimes without thinking, which is why bad news travels faster than good news.1

The easiest way to avoid spreading misinformation is to do something we’re not very good at in a society that pulses like a tachycardic heart: we wait and see what happens. We pause, and if we must pass something along to our social networks, we say we’re not sure it’s real. Since headlines are usually algorithm-generated to catch eyes and spread like Covid-19, we have to read the stories and check the facts before we share, rather than sharing off the cuff.

Somewhere along the line, the right now trumped being right, and we see it everywhere. By simply following a story before sharing it, you can stop spreading misinformation and stop the virus of misinformation in its tracks. Let the story develop. See where it goes. Don’t jump in immediately to write about it when you don’t actually know much about it.

Check news sources for the stories. Wait for confirmation. If it’s important enough to post, point out that it’s unconfirmed.

It’s that simple.

  1. There’s a pun or two in there. ↩︎

AI and the Media, Misinformation and Narratives.

[Image: Rendition of Walter Cronkite.]

News was once trusted more, when the people presenting it were themselves trusted to give people the facts. There were narratives even then, yet there was a balance because of the integrity of the people involved.

Nowadays, this seems to have changed with institutional distrust, political sectarianism, and the battle between partisan and ideological identities and anti-establishment orientations.

In short, things are wonky.

Now the world’s first news network entirely generated by artificial intelligence is set to launch next year.1 This seems a bit odd given that the Dictionary.com word of the year is ‘hallucinate’ because of artificial intelligence, as I’ve written about before.

What could possibly go wrong with a news source that is completely powered by artificial intelligence?

Misinformation. Oddly enough, Dr. Daniel Williams wrote an interesting article on misinformation, pointing out that misinformation could be a symptom instead of the actual problem. He makes some good points, though it does seem a chicken-and-egg issue at this point. Which came first? I don’t think anyone can know the answer to that, and if they did, they’d probably not be trusted, because things have gotten that bad.

At the same time, I look through my Facebook memories just about every day and note more and more content that I had shared is… gone. Deleted. There’s no reasoning given, and when I do find out that something I shared has been deleted, it’s as informative as a random nun wandering around with a ruler, rapping people’s knuckles and not telling them why she’s doing it.

Algorithms. I don’t know that it’s censorship, but they sure do weed a lot of content and that makes me wonder how much content gets weeded elsewhere. I’m not particularly terrible with my Facebook account or any other account. Like everyone else, I have shared things that I thought to be true that ended up not being true, but I don’t do that very often because I’m skeptical.

We would like to believe integrity is inherent in journalism, but the water got muddied somewhere along the way, when news narratives and editorials became more viewed than the actual facts. With the facts, it’s easy to build one’s own narrative, though that’s not easy when people are too busy making a living to do so. Further, we have a tendency toward viewing that which fits our own world view – the ‘echo chambers’ that pop up now and then, such as echoed extremism. To expand beyond our echo chambers, we need to find the time to do so and be willing to have our own world views challenged.

Instead, most people are off chasing the red dots, mistaking being busy for being productive. At a cellular level, we’re all very busy, but that doesn’t mean we’re productive, that we’re adding value to the world around us somehow. There is something to Dr. Daniel Williams’ points on societal malaise.

A news network run completely by artificial intelligence, mixed with the world as we have it now, doesn’t seem ideal. Yet the idea has its selling points, because media itself isn’t trusted, largely because media is built around business, business is built around advertising, and advertising in turn is a game of numbers: to get the numbers, you have to get eyeballs looking at the content. Thus, propping up people’s world views becomes more important when the costs of doing all of that are higher. Is it possible that decreasing the costs would decrease the need to prop up world views for advertising?

We’ll be finding out.

  1. 2024 ↩︎

Surprise: Virtual Isn’t Actual.

Anyone who has had a passing relationship with a dictionary may notice some sarcasm in the title. Virtual, by definition, isn’t actual.

Of course, someone has to go about proving that, and that has value. There’s a semantic question of whether an artificial relationship is real or not, since ‘artificial’ itself means made by humans. It’s easy to go down a path of thought where all relationships are artificial since they are made by humans, but that’s not really what we’re talking about at all.

We’re talking about human society, psychology, and the impact of relationships with artificial intelligences.

“Early on, [Silicon Valley companies] discovered a good formula to keep people at their screens,” said Turkle. “It was to make users angry and then keep them with their own kind. That’s how you keep people at their screens, because when people are siloed, they can be stirred up into being even angrier at those with whom they disagree. Predictably, this formula undermines the conversational attitudes that nurture democracy, above all, tolerant listening.

“It’s easy to lose listening skills, especially listening to people who don’t share your opinions. Democracy works best if you can talk across differences by slowing down to hear someone else’s point of view. We need these skills to reclaim our communities, our democracies, and our shared common purpose.”

“Why virtual isn’t actual, especially when it comes to friends”, Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society at MIT, quoted by Liz Mineo, The Harvard Gazette, December 5th, 2023.

If that sounds familiar, it’s a recurring theme. Just last week, in AI, Ethics and Us, I pointed to what Miguel Ángel Pérez Álvarez had written in the Spanish version of Wired in “IA: implicaciones éticas más allá de una herramienta tecnológica” (“AI: ethical implications beyond a technological tool”), which was in the same vein.

Turkle, giving a keynote, had more space to connect the dots, and so pointed out that the algorithms Silicon Valley companies use are useful for keeping all of us attached to our screens – but I do think that’s a bit unfair, since it’s technology companies generally; while there’s a concentration in Silicon Valley, companies around the world are leveraging these algorithms all the time. And as more and more people are noting, it has broader impacts than what we do as individuals.

In fact, if you look at social networks like Facebook and whatever Musk decided to call Twitter next, you’ll find people in algorithmic caves, almost unable to tunnel their way out because they’re quite happy in that algorithmic cave. Within that little cave there is an echo chamber.

An actual echo chamber created by virtual means.