The Battle For Your Habits.

Found floating around today in the wild. As an atheist who doesn’t use Chrome, I know he ain’t talking to me.

There are some funny memes going around about TikTok and… Chinese spyware, or what have you. The New York Times had a few articles on TikTok last week that were interesting and yet… missed a key point that the memes do not.

Being afraid of Chinese spyware while so many companies have been spying on their customers seems more like bias than anything else.

Certainly, India got rid of TikTok and has done better for it. Personally, I don’t like giving my information to anyone if I can help it, but these days it can’t be helped. Why is TikTok an issue in the United States?

It’s not too hard to speculate that it’s about lobbying by American tech companies who lost the magic sauce for this generation. It’s also not hard to consider India’s reasoning about China being able to push its own agenda, particularly with violence on their shared borders.

Yet lobbying from the American tech companies is most likely, because they want your data and don’t want you to give it to China. They want to be able to sell you stuff based on what you’ve viewed, liked, posted, etc. So really, it’s not even about us.

It’s about the data that we give away daily when browsing social networks of any sort, websites, or even when you think you’re browsing anonymously in Google Chrome while in fact you’re still being tracked. The people advocating a TikTok ban aren’t holding anyone else’s feet to the fire; instead they lean on ‘they will do stuff with your information’ when in fact we’ve had plenty of bad things happen with our information over the years.

Found circulating as a meme, which led me to check out StoneToss.com – some really great work there.

Since 9/11, in particular, the US government has taken a pretty big interest in electronic trails, all in the interest of national security, with the FBI showing up after the Boston Marathon bombing just because people were looking at pressure cookers.

All of this information may well get poured into training models for artificial intelligences, too. Even WordPress.com volunteered people’s blogs rather than asking for volunteers.

What value do you get for that? They say you get better advertising, which is something that I boggle at. Have you ever heard anyone wish that they could see better advertising rather than less advertising?

They say you get the stuff you didn’t even know you wanted, and to a degree, that might be true, but the ability to just go browse things has become a lost art. Just about everything you see on the flat screen you’re looking at is because of an algorithm deciding for you what you should see. Thank you for visiting, I didn’t do that!

Even that system gets gamed. This past week I got an ‘account restriction’ from Facebook for reasons that were not explained, other than instructions to go read the community standards, because algorithms are deciding based on behaviors that Facebook can’t seem to explain. Things really took off with that during Covid, when even people I knew were spreading wrong information because they didn’t know better and, sometimes, willfully didn’t want to know better or understand their own posts in a broader context.

Am I worried about TikTok? Nope. I don’t use it. If you do use TikTok, you should be. But you should worry if you use any social network. It’s not as much about who is selling and reselling information about you as what they can do with it to control what you see.

Of course, most people on those platforms don’t see them for what they are, instead taking things at face value and not understanding the implications for the choices they will have in the future, ranging from advertising to the content one views.

China’s not our only problem.

Plugins, Extensions, Subscriptions Oh My.

I subscribed to The New York Times over the past week, and it’s been going pretty well. The value for the cost, at the special rate they were offering, seems good enough for me.

Yet I had some technical problems. One was not being able to log in on my laptop while the app on my phone worked perfectly. I use Firefox (the Mozilla Foundation abandoned what has now become the SeaMonkey project, which I have also used for some time).

It didn’t help that I wasn’t sure if I’d used Google or Facebook accounts to do the subscription. I generally don’t because… well, none of these companies need to know everything I do, ya know?

I figured that searching for reasons why this was happening would bring up useful information but it really didn’t. The New York Times site suggested clearing cache, deleting cookies, etc. I did it, and still had the issue.

This should not be this hard. I tried, tried again, deleting cache and cookies every time, and nope. No can do.

Eventually, I got around to turning off all the plugins and extensions. That was the culprit. Apparently the New York Times doesn’t play well with one of the plugins or extensions and I just don’t have the time or patience to figure out which one. It’s ridiculous. It should not be that hard.
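For what it’s worth, when I do have the patience, the quickest way I know to isolate a misbehaving extension is a binary search: disable half of them, reload the page, and repeat on whichever half still breaks it. A minimal sketch of the idea, where the extension names and the `site_breaks_with` check are invented for illustration (the real ‘check’ is you reloading the login page by hand), and which assumes a single culprit:

```python
def find_culprit(extensions, site_breaks_with):
    """Binary-search an extension list for the one that breaks a site.

    `site_breaks_with(enabled)` stands in for the manual step of enabling
    only the extensions in `enabled` and reloading the page. Assumes
    exactly one extension is responsible for the breakage.
    """
    candidates = list(extensions)
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        if site_breaks_with(half):
            candidates = half          # culprit is in the first half
        else:
            candidates = candidates[len(candidates) // 2:]  # it's in the other half
    return candidates[0]

# Pretend hypothetical extension "ext5" is the one breaking the login page.
exts = [f"ext{i}" for i in range(8)]
culprit = find_culprit(exts, lambda enabled: "ext5" in enabled)
# With 8 extensions this takes only 3 reload cycles instead of 8.
```

The same halving trick works with any number of extensions, which is why it beats toggling them one at a time.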

The other aspect of this is that I had enough plugins and extensions to make a young hairstylist excited. It had become a house of cards, and many of them I didn’t use anymore.

It’s amazing just how much technocrud one ends up with. I suppose it’s a replacement for all the 8 tracks, cassettes, records and CDs that I had in life.

It’s like the whole app-bloat thing I dodge at every turn. Everyone wants you to use their app, to get updates from their websites, and to notify you when they’ve done nothing but label something old as new and exciting.

Everyone wants to give me more of what I don’t want and less of what I need: An overall experience as a human being where I’m not nickeled and dimed in time and money. We need a better solution, I think.

I don’t want to subscribe to things. I want what I need, and a few little luxuries. That’s it.

I’m not quite sure what it is, but I’ll have that thought on the backburner and I’m open to ideas.

The Supreme Court, Your Social Network, and AI

One of the ongoing issues that people maybe haven’t paid as much attention to is related to the United States Supreme Court and social networks.

That this has a larger impact than just within the United States takes a little bit of understanding. Still, we’ll start in the United States and what started the ball rolling.

“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.

Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”

“Supreme Court Wary of States’ Bid to Limit Federal Contact With Social Media Companies”, Adam Liptak, The New York Times, March 18, 2024

This deals with the last United States Presidential Election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and a lot of false information that was spread, and even there we see arguments about whether the government should be the only one spreading false information.

Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese Gorilla (TikTok), are mainly in the United States. Facebook. The Social Network formerly known as Twitter. So the servers all fall under US jurisdiction.

Let’s pull that 800 lb Chinese Gorilla back in the ring too, where that political issue of TikTok is at odds with who is collecting data from who, since the Great Firewall of China keeps China in China but lets the data from the world go to their government.

Why is that data important? Because it’s being used to train artificial intelligences. It’s about who trains their artificial intelligences faster, really.

Knock the dust off this old tune.

Even WordPress.com, where this site is presently hosted, got into the game by volunteering its customers before telling them how not to volunteer.

The Supreme Court is supposed to have the last say on all manner of things, and because of that there’s a level of ethics assumed of its members – which John Oliver dragged under a spotlight. Let’s just say: there are questions.

It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. This means, since technology companies lobby and support politicians, the social networks you use have more free speech than the users combined based on their income alone – not to mention their ability to choose what you see, what you can say, and who you can say it to by algorithms that they can’t seem to master themselves. In a way that’s heartening, in a way it’s sickening.

So the Supreme Court’s ruling on whether the United States government can interfere with social networks is also about who collects the data, and what sort of information will be used to train the artificial intelligences of the present and future.

The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.

Even more disturbing now is just how much content on the Internet is AI generated, which has already been noted to be significant, and which some experts estimate will reach 90% by 2026.

So who should control what you can post? Should governments decide? Should technology companies?

These days, few trust either. It seems like we need oversight on both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seat-belts.

WordPress.com, Tumblr to Sell Information For AI Training: What You Can Do.

I accidentally posted this on RealityFragments.com, but I think it’s important enough to leave it there. The audiences vary, but both have other bloggers on them.

While I was figuring out how to be human in 2024, I missed that Tumblr and WordPress posts will reportedly be used for OpenAI and Midjourney training.

This could be a big deal for people who take the trouble to write their own content rather than filling the web with Generative AI text to just spam out posts.

If you’re involved with WordPress.org, it doesn’t apply to you.

WordPress.com has an option to use Tumblr as well, so when you post to WordPress.com it automagically posts to Tumblr. Therefore you might have to visit both of the posts below and adjust your settings if you don’t want your content to be used in training models.

This doesn’t mean that they haven’t already sent information to Midjourney and OpenAI. We don’t really know, but from the moment you change your settings…

  • WordPress.com: How to opt out of the AI training is available here.

    It boils down to this part in your blog settings on WordPress.com:


  • With Tumblr.com, you should check out this post. Tumblr is more tricky, and the link text is pretty small around the images – what you need to remember is after you select your blog on the left sidebar, you need to use the ‘Blog Settings’ link on the right sidebar.

Hot Take.

When I was looking into all of this, it turned out that Automattic, the owner of WordPress.com and Tumblr.com, is the one doing the selling.

If you look at your settings and haven’t changed them yet, you’ll see that the default allowed the use of your content for training models. The average person who uses these sites to post content is likely unaware, and in my opinion, if they wanted to do this the right way, the default would have been to opt everyone out.

It’s unclear whether they already sent posts. I’m sure that there’s an army of lawyers who will point out that they did post it in places and that the onus was on users to stay informed. It’s rare for me to use the word ‘shitty’ on KnowProSE.com, but I think it’s probably the best way to describe how this happened.

It was shitty of them to set it up like this. See? It works.

Now some people may not care. They may not be paying users, or they just don’t care, and that’s fine. Personal data? Well, let’s hope that got scrubbed.

Some of us do care. I don’t know how many, so I can’t say a lot or a few. Yet if Automattic, the parent company of both Tumblr and WordPress.com, posts that it cares about user choices, it hardly seems appropriate that the default was to opt everyone in.

As a paying user of WordPress.com, I think it’s shitty to assume I would allow what I write, using my own brain, to be used for a training model that the company gets paid for. I don’t see any of that money. To add injury to that insult of my intelligence, Midjourney and OpenAI also sell subscriptions to the trained AIs, one of which (ChatGPT) I also pay for.

To make matters worse, we sort of have to take the training models on the word of those who use them. They don’t tell us what’s in them or where the content came from.

This is my opinion. It may not suit your needs, and if it doesn’t, have a pleasant day. But if you agree with this, go ahead and make sure your blog is not allowing third-party data sharing.

Personally, I’m unsurprised at how poorly this has been handled. Just follow some of the links early on in the post and revel in dismay.

Noam Chomsky, Luddism and AI.

There’s been a lot of discussion about society and artificial intelligence. This includes the resurgence of mocking shares of Noam Chomsky’s opinion piece, “The False Promise of ChatGPT”, where the only real criticism seems to have come from people asking ChatGPT what it thought of Chomsky’s criticism.

I suppose human thought would be asking too much when it comes to responding to human thought in the age of generative artificial intelligences, and they kind of make Chomsky’s point.

Anyone who has been paying attention to what can presently be done with artificial intelligence should be concerned at some level. The older one is, the more one has to risk, since one’s skills and experience can easily be flattened by these generative AIs.

So, let’s look at the crux of what Chomsky said:

“…Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge…”

Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT“, New York Times, March 8th, 2023.

Given that generative AIs do their ‘magic’ by statistics, we see biases based on what their training models consist of. I caught DALL-E misrepresenting ragtime musicians recently. A simplified way of looking at it is that the training model is implicitly biased because art and literature on the Internet are also biased by what is available and what is not, as well as what they consist of. This could be a problem for mankind’s scientific endeavors, though there is also space to say that it may allow us to connect things that were previously not as easy to connect. This is how I generally use generative AI.
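To make the statistics point concrete, here’s a toy sketch (the labels are invented for illustration): a ‘model’ that simply samples from its training data reproduces whatever skew that data had, which is roughly what happens, in a vastly more complicated way, inside a real generative model.

```python
import random
from collections import Counter

# Hypothetical, oversimplified 'training data': 90% of the corpus shows one
# depiction of a subject, 10% shows another. The labels are made up.
corpus = ["depiction_a"] * 90 + ["depiction_b"] * 10

random.seed(0)  # fixed seed so the example is reproducible

# A 'model' that just samples its training distribution -- a stand-in for
# the statistical machinery of an actual generative model.
generated = [random.choice(corpus) for _ in range(1000)]
counts = Counter(generated)

# The output skews the same way the training data did: roughly 9 to 1.
```

No amount of sampling fixes the skew; the only remedy is changing what goes into the corpus in the first place.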

Debasing our ethics seems to be something we’re pretty good at by ourselves. Consider that there are lawsuits over the use of copyrighted materials to train ChatGPT. When you consider how much people have put into their work over the years, it is at least questionably ethical to use that work without recompense. That’s the ethical side, and it’s subjective; the legal side has yet to be decided.

A definite issue to consider at this moment in history is how Israel is using AI to select targets and how well that is working out for civilians. It is a hot topic, but regardless of where one stands, it is commonly agreed, except by the most extreme and thus problematic positions on either side, that civilian casualties should be avoided. Is it the AI’s fault when civilian Palestinians die? Where is the line for accountability drawn? We can’t even seem to get that settled without involving AI, and AI is involved.

It’s no secret that law and ethics don’t seem compatible often enough. Of course, generative AI use in Law so far has been problematic.

So is this really a case of Noam Chomsky being a Luddite? No, far from it; he’s pointing out that we might have issues with a new technology and specifies what sort of issues he sees. In fact, I even asked ChatGPT, which you can see in the footnote. 1

And yet there are Luddites. In fact, I got to know more about Luddites by reading what a Luddite had to say about humanity’s remaining timeline. One even predicts that humanity will end in 2 years.

I’d say Noam Chomsky’s piece will stand up to the test of time, and maybe some of it will be judged outside of the time he wrote it – the plagiarism aspect seems tied to that copyright issue.

The Luddites have consistently been wrong, and/or humanity has been consistently lucky.

However you look at it, a bit of skepticism is a good thing. Noam Chomsky gave us that. It’s worth understanding it for what it is. Implicitly, it’s partly calling for us to be better humans without a religious guise.

That gets some agreement from me.

  1. ↩︎

AI Reviewing Body Cam Footage, and AIs talking to themselves.

There’s been a lot posted about artificial intelligence since I last wrote about it, but some of it was just hype and marketing, whereas the really cool stuff tends to sit well. There are two main topics that I’ll get out of the way with this post – more verbose topics are coming this week.

Talking To Myself…

There’s been some thought about the ‘inner monologue’ that some of us have. Not all of us do have that inner monologue, and we don’t have a reason why yet, but apparently people who do have an inner monologue think that artificial intelligences can benefit from it.

They are finding ways that an inner monologue is beneficial for artificial intelligences, which may oddly help us understand our own inner monologues, or the lack of them.

If you want to read a bit more deeply into it, “Thought Cloning: Learning to Think while Acting by Imitating Human Thinking” is an interesting paper.

Having spoken to myself now and then over the years, I’m not sure it’s as productive as some think, but I’m not an expert and only have my own experience to base that off of. I do know from my own experience that it’s very easy to reinforce biases that way.

I do some thinking with language, but mainly my thinking is what I would best describe as ‘visually kinetic’, so I am pretty interested in this.

Reviewing Body Cams

One of the problems with any sort of camera system is reviewing it. It takes a long time to review footage, and an experienced eye to do it.

Police departments are turning to artificial intelligence to help with this. Given that real-time facial recognition already exists, on the surface this seems like a good use of it. However, there are problems: realistic concerns for communities of color, as well as for data privacy. A running body cam collects every interaction, sure, but it also collects information on everybody involved in those interactions, as well as anyone who accidentally wanders into the frame.

With everything increasingly connected, watching the watchmen through body cams means watching the watchers of the body cam footage.

I wonder what their inner monologue will be like while reviewing hours and hours of boring footage.