If you haven’t left Facebook yet, as I have, you’ve probably noticed a lot of AI spam. I noticed plenty of it while I was there and blocked a bunch of it, though it was hard to keep up with.
Well, it isn’t just you.
“…What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”…”
So not only are the algorithms arbitrarily restricting user accounts, as they did mine, but they’re feeding people spam on a scale that goes well beyond what any one individual would notice.
Meanwhile, Facebook has been buying GPUs to develop ‘next level’ AI, when in fact their algorithms are about as gullible as their GPU purchases are numerous.
I’ve seen more and more people leaving Facebook because their content just isn’t getting into timelines. The possibilities there are interesting to consider. While some of the complaints about the Facebook algorithms are fun to read, writing that sort of complaint doesn’t accomplish much. It’s not as if Facebook is going to change its algorithms over complaints.
As I’ve pointed out to people, the people using Facebook aren’t the customers. The people using Twitter-X aren’t the customers either. To be a customer, you have to buy something. Who buys things on social networks? Advertisers, for one.
That’s something Elon Musk didn’t quite get the memo on. Why would he be this confident? Hubris? Maybe; that always seems to be a factor, but it’s probably something more sensible.
There’s something pretty valuable in social networks that people don’t see. It’s the user data, which is strangely what the canceled Westworld was about. The real value is in being able to predict what people want and influence outcomes, much as the television series showed after the first season.1
Many people seem to think that privacy is only about credit card information and personal details. It also covers the trail of choices that lets algorithms predict future choices. Humans are black boxes in this regard, and if you have enough computing power you can go around poking and prodding to see the results.
Artificial intelligences need learning models, and if you own a social network, you not only get to poke and prod – you have the potential to influence. Do your future choices fall under privacy? Probably not – but your past choices probably should, because that’s how you get to predicting and influencing future choices.
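To make that concrete, here’s a toy sketch – with entirely made-up features and data, and nothing like any platform’s real pipeline – of what “predicting future choices from past choices” looks like in code: a small classifier trained on what someone engaged with before, scoring what they might engage with next.

```python
# A toy sketch of predicting choices from past choices.
# The features and data are entirely made up for illustration;
# real recommendation systems are vastly more complex.
from sklearn.linear_model import LogisticRegression

# Each row describes a piece of content a user already saw:
# [matches past interests, posted by a friend, provokes outrage]
past_content = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
]
# Whether the user engaged with it (1) or scrolled past (0).
engaged = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(past_content, engaged)

# "Poking and prodding": estimate engagement before anything is shown.
candidate = [[1, 0, 0]]  # matches interests, not from a friend, no outrage
print(model.predict_proba(candidate)[0][1])  # estimated engagement probability
```

The unsettling part is that anything that can score engagement ahead of time can also be used to select for it – and that’s the influence part.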
I never really got into Twitter. Facebook was less interruptive. On the surface, these started off as content management systems that provided a service and had paid advertising to support it, yet now one has to wonder at the value of the user data. Back in 2018, Cambridge Analytica harvested data from 50 million Facebook users. Zuckerberg later apologized and talked about how third-party apps would be limited. To his credit, I think it was handled pretty well.
Still, it also signaled how powerful and useful that data could be, and if you own a social network, that would at least give you pause. After all, Cambridge Analytica influenced politics at the least, and that could have also influenced markets. The butterfly effect reigns supreme in the age of big data and artificial intelligence.
This is why privacy is important in the age of artificial intelligence learning models, algorithms, and so forth. It can impact the responses one gets from any large language model, which is why there are pretty serious questions regarding privacy, copyright, and other things related to training them. Bias leaks into everything, and popular bias on social networks simply reflects whatever is most vocal and repetitive – not what is actually correct. This is also why canceling as a cultural phenomenon can be so damaging. It’s a nuclear option in the world of information, and oddly, large groups of smart or stupid people can use it with impunity.
This is why we presently see large language models hedge on some questions: there are conflicts within the learning model, as well as some well-designed algorithms. For that we should be a little grateful.
We should probably be lobbying to find out what is in the learning models these artificial intelligences are given, in much the same way we used2 to grill people who would represent us collectively. Sure, Elon Musk might be taking a financial hit, but what if it’s a gambit to leverage user data for bigger returns later, with his ethics embedded in how his companies do it?
You don’t have to like or dislike people to question them and how they use this data, but we should all be a bit concerned. Yes, artificial intelligence is pretty cool and interesting, but unleashing it without questioning the integrity of the information it’s trained on is, at the least, foolish.
Be careful what you share, what you say, who you interact with and why. Quizzes that require access to your user profile are definitely questionable, as that information, and the information of the people you’re connected with, quickly gets folded into data that creates a digital shadow of yourself, part of the larger crowd that can influence the now and the future.
This is not to say it was canceled for this reason. I only recently watched it, and have yet to finish season 3, but it’s very compelling and topical content for the now. Great writing and acting. ↩︎
We don’t seem to be that good at grilling people these days, perhaps because of all of this and more. ↩︎
News was once trusted more, when the people presenting it were themselves trusted to give people the facts. There were narratives even then, yet there was a balance because of the integrity of the people involved.
What could possibly go wrong with a news source that is completely powered by artificial intelligence?
Misinformation. Oddly enough, Dr. Daniel Williams wrote an interesting article on misinformation, pointing out that misinformation could be a symptom instead of the actual problem. He makes some good points, though it does seem a chicken-and-egg issue at this point. Which came first? I don’t think anyone can know the answer to that, and if they did, they’d probably not be trusted, because things have gotten that bad.
At the same time, I look through my Facebook memories just about every day and note that more and more of the content I had shared is… gone. Deleted. No reason is given, and when I do find out that something I shared has been deleted, it’s as informative as a random nun wandering around with a ruler, rapping people’s knuckles and not telling them why she’s doing it.
Algorithms. I don’t know that it’s censorship, but they sure do weed out a lot of content, and that makes me wonder how much content gets weeded out elsewhere. I’m not particularly terrible with my Facebook account or any other account. Like everyone else, I have shared things that I thought to be true that ended up not being true, but I don’t do that very often because I’m skeptical.
We would like to believe integrity is inherent in journalism, but the water got muddied somewhere along the way when news narratives and editorials became more viewed than the actual facts. With the facts, it’s easy to build one’s own narrative, though not easy enough when people are too busy making a living to do so. Further, we have a tendency toward viewing that which fits our own world view – the ‘echo chambers’ that pop up now and then, such as echoed extremism. To expand beyond our echo chambers, we need to find the time to do so and be willing to have our own world views challenged.
Instead, most people are off chasing the red dots, sometimes mistaking being busy for being productive. At a cellular level, we’re all very busy, but that doesn’t mean we’re productive, that we’re adding value to the world around us somehow. There is something to Dr. Daniel Williams’ points on societal malaise.
A news network run completely by artificial intelligence, mixed with the world as we have it now, doesn’t seem ideal. Yet the idea has its selling points, because media itself isn’t trusted, largely because media is built around business; business is built around advertising; and advertising is a game of numbers, where getting the numbers means getting eyeballs on the content. Thus, propping up people’s world views matters more when the costs of doing all of that are higher. Is it possible that decreasing the costs would decrease the need to prop up world views for advertising?
It’s not a mistake that I was writing about practical communication earlier this morning, because on the Internet there are different rules if you’re concerned about traffic to your content.
There’s all manner of Search Engine Optimization stuff, from linking to similar content to being linked to from similar content, to using words and phrases commonly searched for… to… well, SEO is not as easy as it once was.
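Just to illustrate the smallest piece of that – the “words and phrases commonly searched for” part – here’s a toy keyword-counting sketch. The phrases are made up, and real SEO tooling weighs links, metadata, and page structure well beyond anything like this.

```python
# Toy sketch of one tiny piece of SEO tooling: counting how often
# target search phrases appear in a draft. Real tools also weigh
# links, metadata, and page structure; this is just the counting.
import re

def keyword_counts(text, phrases):
    lowered = text.lower()
    return {p: len(re.findall(re.escape(p.lower()), lowered)) for p in phrases}

draft = "Practical communication matters. SEO is not as easy as it once was."
print(keyword_counts(draft, ["practical communication", "seo"]))
# {'practical communication': 1, 'seo': 1}
```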
Writing with SEO in mind is not an easy task if one wants readable content. Sure, people might end up staring at your content because you’re a wizard at marketing it through SEO and other means, but that doesn’t mean your content is actually useful. I can’t tell you how many times I’ve tried researching something and fallen into what I call ‘ambiguity traps’.
For example, yesterday I was trying to figure out how to set the default sound volume on a Windows 10 machine when it boots, so I don’t have to always turn down the sound. That turned up everything but what I was searching for, and after interrogating a few search engines that gave me results about the drive volume instead of the sound volume, I realized that Microsoft didn’t seem to have the capability I was looking for.
A useful piece of content might have been, “Nope. You’re out of luck. You can’t do that.” Of course, there’s the outside chance that there’s some secret setting hidden somewhere in the registry that makes it all possible, but I do not feel the need to sacrifice a farm animal and do the hokey pokey.
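For what it’s worth, the closest workaround I can see is scripting the volume yourself at login instead of hunting for a native setting. Here’s a minimal sketch, assuming the third-party pycaw and comtypes libraries are installed (pip install pycaw comtypes); wire it to a scheduled task that runs at logon:

```python
# Sets the master sound volume to 25% on Windows, using the
# third-party pycaw library (an assumption; not a native setting).
from ctypes import POINTER, cast
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# Get the default playback device and its volume control endpoint.
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))

# Scalar volume ranges from 0.0 (mute) to 1.0 (max).
volume.SetMasterVolumeLevelScalar(0.25, None)
```

That’s a workaround, not a fix – but it beats registry spelunking and farm animals.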
Generally speaking, on the Internet, it’s not as much about being useful as it is driving traffic to get advertising impressions. A few sites actually care about the content, and those sites aren’t commercial sites unless they’re hidden behind a paywall, which means their content likely doesn’t get indexed by the search engine bots.
And that’s what Web 2.0 gave us from the technological tropism. It doesn’t end there. If you haven’t seen the BewareOfImages.com documentary (2016), follow the link and go see it. It’s 2 hours and 40 minutes long, but worth the watch, so grab a beverage and snacks when you do.
Somewhere during all of this, opinions gained traction over news, and then we got into fake news. If you watch the BewareOfImages documentary, you’ll see that this isn’t all that new either. It seems like a recurring theme.
All of this, quite possibly, makes it into the large language models that are so hyped right now.
What could possibly go wrong? In the broad strokes, that’s what some of us are worried about.
It’s no secret that Google is in the AI “arms race”, as it has been called, and there is some criticism that they’re in too much of a hurry.
“…The [AI] answer is displayed at the top, and on the left are links to sites from which it drew its answer. But this will look very different on the smaller screen of a mobile device. Users will need to scroll down to see those sources, never mind other sites that might be useful to their search.
That should worry both Google’s users and paying customers like advertisers and website publishers. More than 60% of Google searches in the US occur on mobile phones. That means for most people, Google’s AI answer will take up most of the phone screen. Will people keep scrolling around, looking for citations to tap? Probably not…”
This could have a pretty devastating effect on Web 2.0 business models, which evolved around search engine results. That, in turn, could be bad for Google’s business model as it stands, which seems to indicate that their business model will be evolving soon too.
Will they go to a subscription model for users? It would make sense – if they didn’t have competition. They do. The other shoe has to drop on this. One thing we can expect from Google is that they have thought this through, and as an 800 lb gorilla that admonishes those who don’t follow standards, it will be interesting to see how the industry reacts.
It may change, and people are already advocating for that somewhat.
“…Google Search’s biggest strength, in my opinion, was its perfect simplicity. Punch in some words, and the machine gives you everything the internet has to offer on the subject, with every link neatly cataloged and sorted in order of relevance. Sure, most of us will only ever click the first link it presents – god forbid we venture to the dark recesses of the second page of results – but that was enough. It didn’t need to change; it didn’t need this.
There’s an argument to be made that search AI isn’t for simple inquiries. It’s not useful for telling you the time in Tokyo right now, Google can do that fine already. It’s for the niche interrogations: stuff like ‘best restaurant in Shibuya, Tokyo for a vegan and a lactose intolerant person who doesn’t like tofu’. While existing deep-learning models might struggle a bit, we’re not that far off AIs being able to provide concise and accurate answers to queries like that…”
Guyton’s article (linked above in the citation) is well worth the read in its entirety. It has pictures and everything.
The bottom line on all of this is that we don’t know what the AIs are trained on, we don’t know how it’s going to affect business models for online publishers, and we don’t know if it’s actually going to improve the user experience.
It seems like every time I open some social media site, someone’s posting about Russia. About how they allegedly influenced the U.S. Elections, about who in the Trump Administration passed notes to someone in Russia, and so on and so forth.
That’s all I know, that’s all I’m going to know, and realistically, I don’t even need to know that. Wait, what?
Right. I don’t need to know all of that. We live on this rotating sphere filled with people who are separated by lines on maps. These people – human beings, so you know – are only citizens of one country or another by accident of birth and legal policies decided before they were born. Maybe a few snuck through here and there, but that’s how it is.
And these countries used to be separated by oceans or fences or languages or… well, they were more separate than they are now on the Internet. Everyone is influencing everyone’s elections one way or the other by mouthing off on social media, so all we’re really discussing is degree.
But those posts miss a significant point – one that no one is talking about because it’s so inconvenient and, probably, because it doesn’t sell advertising.
Ads or no ads, no one clicks an ad unless they already have a sentiment or world view that makes them believe it in the first place.
That sentiment could not have been Russian. It wasn’t from Pluto, either. That sentiment that allowed that advertising to work, if indeed it did, was part of the United States.
Either that, or Russians are running amok in the U.S., holding guns to people’s heads and telling them to click the advertisements.