From Inputs to The Big Picture: An AI Roundup

This started off as a baseline post regarding generative artificial intelligence and its aspects, and it grew fairly long because information kept coming out even as I was writing. It’s my intention to do a ’roundup’ like this highlighting different focuses as needed. Every bit of it is connected, but in social media postings things tend to be written about in silos. I’m attempting to integrate them, since the larger implications are hidden in these details, and will try to stay on top of it as things progress.

It’s long enough that it could have been several posts, but I wanted it all together at least once.

No AI was used in the writing, though some images have been generated by AI.

The two versions of artificial intelligence on the table right now – the marketed one and the real one – have various problems that make it seem like we’re wrestling a mating orgy of cephalopods.

The marketing aspect is a constant distraction, feeding us what helps stock prices and goodwill toward those implementing the generative AIs, while the reality of these generative AIs is not really being addressed in a cohesive way.

To simplify, this post breaks it down into the Input, the Output, and the impacts on the ecosystem the generative AIs work in.

The Input.

There’s a lot that goes into these systems other than money and water. There’s the information used for the learning models, the hardware needed, and the algorithms used.

The Training Data.

The focus so far has been on what goes into their training data, and that has been an issue, prompting lawsuits and, less obviously, eroding trust in the companies involved.

…The race to lead A.I. has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times…

“How Tech Giants Cut Corners to Harvest Data for A.I.“, Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson and Nico Grant, New York Times, April 6, 2024 1

Of note, too, is that Google has been indexing AI-generated books, which is what’s called ‘synthetic data’. Training on it has been warned against, but it is something companies are planning for or even doing already, consciously and unconsciously.

While some of these actions are questionably legal, to some their ethics are not in question at all – thus the revolt mentioned last year against AI companies using content without permission. That revolt is of questionable effect, because no one seems to have insight into what the training data consists of, and no one seems to be auditing it.

There’s a need for that audit, if only to allow for trust.

…Industry and audit leaders must break from the pack and embrace the emerging skills needed for AI oversight. Those that fail to address AI’s cascading advancements, flaws, and complexities of design will likely find their organizations facing legal, regulatory, and investor scrutiny for a failure to anticipate and address advanced data-driven controls and guidelines.

Auditing AI: The emerging battlefield of transparency and assessment“, Mark Dangelo, Thomson Reuters, 25 Oct 2023.

While everyone is hunting down data, no one seems to be seriously working on oversight and audits, at least in a public way, though the United States is pushing for global regulations on artificial intelligence at the UN. The status of that doesn’t seem to have been updated, even as artificial intelligence is being used to select targets in at least two wars right now (Ukraine and Gaza).

There’s an imbalance here that needs to be addressed. It would be sensible to have external auditing of the learning models and their data sources, as well as the algorithms involved – and, to get a little ahead, the output too. Of course, these sorts of things should be done with trading on stock markets as well, though that doesn’t seem to have made much headway in all the time it has been happening either.

Some websites are trying to block AI crawlers, and it is an ongoing process. Blocking them requires knowing who they are, and it doesn’t guarantee that bad actors won’t stop by anyway.
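For what it’s worth, blocking usually comes down to one of two things: a robots.txt Disallow rule aimed at a crawler’s published user agent (GPTBot for OpenAI and CCBot for Common Crawl are two of the documented ones), or refusing the request at the server. Here’s a minimal sketch of the latter as Python WSGI middleware – the user-agent list is illustrative rather than exhaustive, and nothing stops a bad actor from simply lying about its user agent:

```python
# A minimal sketch of server-side blocking by user agent. The tokens below
# are illustrative; check each company's published crawler documentation
# for the current strings, and remember a misbehaving bot can spoof them.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def block_ai_crawlers(app):
    """Wrap a WSGI application so requests from known AI crawlers get a 403."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "").lower()
        if any(token.lower() in user_agent for token in AI_CRAWLER_TOKENS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawlers are not permitted on this site.\n"]
        return app(environ, start_response)
    return middleware
```

The robots.txt route is politer and depends entirely on the crawler choosing to honor it, which is exactly why knowing who the crawlers are – and accepting that some won’t identify themselves – matters.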

There is a new bill being pressed in the United States, the Generative AI Copyright Disclosure Act, that is worth keeping an eye on:

“…The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The bill would need companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies…”

New bill would force AI companies to reveal use of copyrighted art“, Nick Robins-Early, TheGuardian.com, April 9th, 2024.

Given how much information is used by these companies already from Web 2.0 forward, through social media websites such as Facebook and Instagram (Meta), Twitter, and even search engines and advertising tracking, it’s pretty obvious that this would be in the training data as well.

The Algorithms.

The algorithms for generative AI are pretty much trade secrets at this point, but one has to wonder why so much data is needed to feed the training models when better algorithms could require less. Consider that a well-read person could answer some questions, even as a layperson, with less of a carbon footprint. We have no insight into the algorithms either, which makes it seem as though these companies are simply throwing more hardware and data at the problem rather than being more efficient with the data and hardware they already took.

There’s not much news about that, and it’s unlikely that we’ll see any. It does seem like fuzzy logic is playing a role, but it’s difficult to say to what extent, and given the nature of fuzzy logic, it’s hard to say whether its implementation is as good as it should be.

The Hardware.

Generative AI has brought about an AI chip race between Microsoft, Meta, Google, and Nvidia, which leaves smaller companies that can’t afford to compete in that arena at a disadvantage so great that competing could be seen as impossible, at least at present.

The future holds quantum computing, which could make all of the present efforts obsolete, but no one seems interested in waiting around for that to happen. Instead, it’s full speed ahead, with Nvidia presently dominating the hardware market for these AI companies.

The Output.

One of the larger topics that seems to have faded is what some called ‘hallucinations’ by generative AI. Strategic deception was also very prominent for a short period.

There is criticism that the algorithms are making the spread of false information faster, and the US Department of Justice is stepping up efforts to go after the misuse of generative AI. This is dangerous ground, since algorithms are being sent out to hunt the products of other algorithms, and the crossfire between them doesn’t care too much about civilians.2

Then there’s the impact on education: as students use generative AI, education itself has been disrupted. It is being portrayed as an overall good, which may simply be an acceptance that it’s not going away. It’s interesting to consider that the AI companies have taken more content than students could possibly get or afford through the educational system, which is something worth exploring.

Given that ChatGPT is presently 82% more persuasive than humans – likely because it has been trained on persuasive works (see the Input: Training Data above) – and since most content on the internet is marketing products, services or ideas, that was predictable. While it’s hard to say how much of the content being put into training data feeds on our confirmation biases, it’s fair to say that at least some of it does. Then there are the other biases that the training data inherits through omission or the selective writing of history.

There are a lot of problems, clearly, and much of it can be traced back to the training data, which even on a good day is as imperfect as we are: it can magnify, distort, or even be consciously influenced by good or bad actors.

And that’s what leads us to the Big Picture.

The Big Picture

…For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite…

Inside the shadowy global battle to tame the world’s most dangerous technology“, Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon, Politico.com, March 26th, 2024

What most people don’t realize is that the ‘game’ includes social media and the information it provides for training models, such as what is happening with TikTok in the United States now. There is a deeper battle, and just perusing content on social networks gives data to those building training models. Even WordPress.com, where this site is presently hosted, is selling data, though there is a way to unvolunteer one’s self.

Even the Fediverse is open to data being pulled for training models.

All of this, combined with the persuasiveness of generative AI that has given psychology pause, has democracies concerned about the influence. In a recent example, Grok, Twitter X’s AI for paid subscribers, fell victim to what was clearly satire and caused a panic – which should also have us wondering about how we view intelligence.

…The headline available to Grok subscribers on Monday read, “Sun’s Odd Behavior: Experts Baffled.” And it went on to explain that the sun had been, “behaving unusually, sparking widespread concern and confusion among the general public.”…

Elon Musk’s Grok Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X“, Matt Novak, Gizmodo, 8 April 2024

Of course, some levity is involved in that one, whereas Grok posting that Iran had struck Tel Aviv (Israel) with missiles seems dangerous, particularly when posted to the front page of Twitter X. It shows the dangers of fake news with AI, deepens concerns related to social media and AI, and should have us asking why billionaires involved in artificial intelligence wield the influence that they do. How much of that is generated? We have an idea how much of it is lobbied for.

Meanwhile, Facebook has been spamming users and restricting accounts without demonstrating cause. If there were a videotape in a Blockbuster on this, it would be titled, “Algorithms Gone Wild!”.

Journalism is also impacted by AI, though real journalists tend to be rigorous about their sources. Real newsrooms have rules, and while we don’t have much insight into how AI is being used in newsrooms, it stands to reason that if a newsroom is to be a trusted source, it will go out of its way to make sure that it is: it has a vested interest in getting things right. This has not stopped some websites parading as trusted sources from disseminating untrustworthy information because, even in Web 2.0, when the world had an opportunity to discuss such things at the World Summit on the Information Society, the country with the largest web presence did not participate much, if at all, at a government level.

Then we have the thing that concerns the most people: their lives. Jon Stewart even did a Daily Show segment on it, which is worth watching, because people are worried, with good reason, about generative AI taking their jobs. Even as the Davids of AI3 square off for your market share, layoffs have been happening in tech as companies reposition for AI.

Meanwhile, AI is also apparently being used as a cover for some outsourcing:

Your automated cashier isn’t an AI, just someone in India. Amazon made headlines this week for rolling back its “Just Walk Out” checkout system, where customers could simply grab their in-store purchases and leave while a “generative AI” tallied up their receipt. As reported by The Information, however, the system wasn’t as automated as it seemed. Amazon merely relied on Indian workers reviewing store surveillance camera footage to produce an itemized list of purchases. Instead of saving money on cashiers or training better systems, costs escalated and the promise of a fully technical solution was even further away…

Don’t Be Fooled: Much “AI” is Just Outsourcing, Redux“, Janet Vertesi, TechPolicy.com, Apr 4, 2024

Maybe AI is creating jobs in India by proxy. It’s easy to blame problems on AI, too, which is a larger problem because the world often looks for something to blame and having an automated scapegoat certainly muddies the waters.

And the waters of The Big Picture of AI are muddied indeed – perhaps partly by design. After all, those involved are making money, and they now have even better tools to influence markets, populations, and you.

In a world that seems to be running a deficit when it comes to trust, the tools we’re creating seem to be increasing rather than decreasing that deficit at an exponential pace.

  1. The full article at the New York Times is worth expending one of your free articles, if you’re not a subscriber. It gets into a lot of specifics, and is really a treasure chest of a snapshot of what companies such as Google, Meta and OpenAI have been up to and have released as plans so far. ↩︎
  2. That’s not just a metaphor, as the Israeli use of Lavender (AI) has been outed recently. ↩︎
  3. Not the Goliaths. David was the one with newer technology: The sling. ↩︎

The Supreme Court, Your Social Network, and AI

One of the ongoing issues that people maybe haven’t paid as much attention to is related to the United States Supreme Court and social networks.

That this has a larger impact than just within the United States takes a little bit of understanding. Still, we’ll start in the United States, with what started the ball rolling.

“A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.

Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats….”

Supreme Court Wary of States’ Bid to Limit Federal Contact With Social Media Companies“, Adam Liptak, New York Times, March 18, 2024

This deals with the last United States presidential election, and we’re in an election year. It also had a lot to do with the response to Covid-19 and a lot of false information that was spread, and even there we see arguments about whether the government should be the only one spreading false information.

Now I’ll connect this to the rest of the planet. Social networks, aside from the 800 lb Chinese gorilla (TikTok), are mainly in the United States. Facebook. The social network formerly known as Twitter. So the servers all fall under US jurisdiction.

Let’s pull that 800 lb Chinese gorilla back into the ring, too, where the political issue of TikTok is at odds with who is collecting data from whom, since the Great Firewall of China keeps China in China but lets data from the rest of the world go to their government.

Why is that data important? Because it’s being used to train artificial intelligences. It’s about who trains their artificial intelligences faster, really.

Knock the dust off this old tune.

Even WordPress.com, where this site is presently hosted, got into the game by volunteering its customers before telling them how not to volunteer.

The Supreme Court is supposed to have the last say on all manner of things, and because of that there’s a level of ethics assumed of the members – which John Oliver dragged under a spotlight. Let’s just say: there are questions.

It’s also worth noting that in 2010, the U.S. Supreme Court decided that money was free speech. This means that, since technology companies lobby and support politicians, the social networks you use have more free speech than their users combined, based on their income alone – not to mention their ability to choose what you see, what you can say, and who you can say it to, via algorithms they can’t seem to master themselves. In a way that’s heartening; in a way it’s sickening.

So the Supreme Court’s ruling on the United States government’s interference in social networks is also about who collects the data, and what sort of information will be used to train the artificial intelligences of the present and future.

The dots are all there, but it seems like people don’t really understand that this isn’t as much a fight for individual freedom of speech as it is about deciding what future generations will be told.

Even more disturbing now is just how much content on the Internet is AI generated, which has already been noted to be a significant amount, and which some experts estimate will be 90% by 2026.

So who should control what you can post? Should governments decide? Should technology companies?

These days, few trust either. It seems like we need oversight on both, which will never happen on a planet where everybody wants to rule the world. Please fasten your seat-belts.

So. Many. Layoffs.

I’ve been looking at getting back into the ring of software engineering, but it doesn’t seem like a great time to do it.

When Google was laying off workers, I shook my head a bit. It turns out that Google spent 800 million dollars on layoffs just this month. Just this month!

By comparison, Google spent $2.1 billion on layoff expenses for more than 12,000 employees over the course of 2023. Other Google employees only found out people had been dismissed when emails to them bounced back in February of last year.

With so many layoffs, hopefully they’re getting better at it. Well, maybe not. Google employees have been told more layoffs are coming this year.

I imagine that there are some pretty high quality resumes floating around. As far as the tech field goes, Google is probably considered top tier, and landing a position against someone with Google on their resume is going to be tough.

There’s a problem with that, though. More than 25,000 tech workers from 100 companies got the axe in the first few weeks of 2024. Meta, Amazon, Microsoft, Google, TikTok and Salesforce are included in that… and Microsoft’s numbers may account for the Activision Blizzard layoffs that happened this past week, sadly.

Blizzard was one of those dream jobs I had as a significantly younger developer way back when. They were often late on delivery for a new game, but it was pretty much worth it. I still play Starcraft II.

It’s become an employer’s job market – maybe it was before, but definitely more so now – and in an era when artificial intelligence may be becoming more attractive to companies for software development, as well as other things. For all we know, they may have consulted artificial intelligence for some of the layoffs. It wouldn’t be the first time that happened, though that was in Russia.

I can’t imagine that Google, Microsoft, Meta and Amazon aren’t using big data and AI for this, at least behind the scenes, but it’s probably not being explained because of the blowback that might cause. ‘Fired by AI’ is not something that people would like to see.

When tech companies axe employees, Wall Street rewards them, so stock prices go up – and there are more unemployed technology folk in a period when AI tools are making so many types of productivity easier. Maybe too easy.

This reminds me so much of the 1990s. The good news is that tech survived the 1990s despite the post-merger layoffs.

Of course, the correction on the NPR article (at the bottom) is something I wish I had caught earlier. “Nearly 25,000 tech workers were laid in the first weeks of 2024. Why is that?” would definitely be an article worth reading.

Social Networks, Privacy, Revenue and AI.

I’ve seen more and more people leaving Facebook because their content just isn’t getting into timelines. It’s an interesting thing to consider the possibilities of. While some of the complaints about the Facebook algorithms are fun to read, it doesn’t really mean too much to write those sorts of complaints. It’s not as if Facebook is going to change its algorithms over complaints.

As I’ve pointed out to people, those using Facebook aren’t the customers. People using Twitter-X aren’t the customers either. To be a customer, you have to buy something. Who buys things on social networks? Advertisers are one, of course.

That’s something Elon Musk didn’t quite get the memo on. Why would he be this confident? Hubris? Maybe – that always seems a factor – but it’s probably something more sensible.

Billionaires used to be much better spoken, it seems.

There’s something pretty valuable in social networks that people don’t see. It’s the user data, which is strangely what the canceled Westworld was about. The real value is in being able to predict what people want and influence outcomes, much as the television series showed after the first season.1

Many people seem to think that privacy is only about credit card information and personal details. It also includes the choices that allow algorithms to predict future choices. Humans are black boxes in this regard, and if you have enough computing power you can go around poking and prodding to see the results.

Have you noticed that these social networks are linked somehow to AI initiatives? Facebook is linked to Meta’s AI initiatives. Musk, chief twit at X, has his fingers in the AI pie too.

Artificial intelligences need learning models, and if you own a social network, you not only get to poke and prod – you have the potential to influence. Are your future choices something that fall under privacy? Probably not – but your past choices probably should be because that’s how you get to predicting and influencing future choices.

I never really got into Twitter. Facebook was less interruptive. On the surface, these started off as content management systems that provided a service and had paid advertising to support it, yet now one has to wonder at the value of the user data. Back in 2018, it came out that Cambridge Analytica had harvested data from 50 million Facebook users. Zuckerberg later apologized and talked about how third-party apps would be limited. To his credit, I think it was handled pretty well.

Still, it also signaled how powerful and useful that data could be, and if you own a social network, that would at least give you pause. After all, Cambridge Analytica influenced politics at the least, and that could have also influenced markets. The butterfly effect reigns supreme in the age of big data and artificial intelligence.

This is why privacy is important in the age of artificial intelligence learning models, algorithms, and so forth. It can impact the responses one gets from any large language model, which is why there are pretty serious questions regarding privacy, copyright, and other things related to training them. Bias leaks into everything, and popular bias on social networks is simply about the most vocal and repetitive – not about what is actually correct. This is also why canceling, as a cultural phenomenon, can be so damaging. It’s a nuclear option in the world of information, and oddly, large groups of smart or stupid people can use it with impunity.

This is why we see large language models hedge on some questions presently, because of conflicts within the learning model as well as some well-designed algorithms. For that we should be a little grateful.

We should probably be lobbying to find out what is in these learning models that artificial intelligences are given, in much the same way we used2 to grill people who would represent us collectively. Sure, Elon Musk might be taking a financial hit, but what if it’s a gambit to leverage user data for bigger returns later, with his ethics embedded in how he gets his companies to do that?

You don’t have to like or dislike people to question them and how they use this data, but we should all be a bit concerned. Yes, artificial intelligence is pretty cool and interesting, but unleashing it without questioning the integrity of the information it’s trained on is at the least foolish.

Be careful what you share, what you say, who you interact with and why. Quizzes that require access to your user profile are definitely questionable, as that information, and the information of the people you are connected with, quickly gets folded into data creating a digital shadow of yourself – part of the larger crowd that can influence the now and the future.

  1. This is not to say it was canceled for this reason. I only recently watched it, and have yet to finish season 3, but it’s very compelling and topical content for the now. Great writing and acting. ↩︎
  2. We don’t seem to be that good at grilling people these days, perhaps because of all of this and more. ↩︎

Why Social Media Moderation Fails

A clear parody of a Ukrainian tractor pulling the Moskva.

Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it’s quite possible that the Facebook algorithms that have run amok with me over similar things – clear parody – may further restrict my account. I clearly marked the image as a parody.

Let’s see what happens. I imagine they’ll just toss more restrictions on me, which is why Facebook and I aren’t as close as we once were. Anyone who thinks a tractor pulling the sunk Moskva really happened should probably have their head examined, but this is the issue with such algorithms left unchecked. It quite simply is impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, even having been blown up when their owners were simply trying to reap their harvests.

But this is not about that.

This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.

What The Hell Do You Know?

Honestly, not that much. As a user, I’ve steered clear of most problems with social networks simply by knowing it’s not a private place where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.

What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since I couldn’t work on it by myself because of the infrastructural constraints of Trinidad and Tobago (which still defies them for emergency communications), I started working on the other aspects of it, and the core issue was ‘trusted sources’.

Trusted Sources.

To understand this problem: you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But what if your mechanic specializes in your car, while your friend has a BMW that spends more time in the shop than on the road? For that BMW, your friend might be the trusted source.

You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.

In a disaster situation, you have the people you have on the ground. You might be lucky enough to have doctors, nurses, EMTs and people with some experience in dealing with a disaster of whatever variety is on the table, and so you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.

So the Alert Retrieval Cache I was working on after its instantiation was going to have to deal with these very human issues, and the best way to deal with that is with other humans. We’re kind of good at that, and it’s not something that AI is very good at, because AI is built by specialists and, beyond job skills, most people are generalists. You don’t have to be a plumber to fix a toilet, and you don’t have to be a doctor to put a bandage on someone. What’s more, people can grow beyond their pasts, despite an infatuation in Human Resources with the past.

Nobody hires you to do what you did, they hire you to do what they want to do in the future.

So even just in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters – open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot in my opinion) – finding trusted sources is a complete crapshoot. This leads everyone to trust nothing, or some to trust everything.

Generally, if it goes with your cognitive bias, you run with it. We’re all guilty of it to some degree. The phrase, “Trust but verify” is important.

In social media networks, ‘fact checking’ became the greatest thing since giving up one’s citizenship before a public offering. So fact checking happens, and for the most part it’s good – but when applied to parody, it fails. Why? Because algorithms don’t have a sense of humor. It’s either a fact, or it’s not. And so when I posted the pictures of Ukrainian tractors towing everything, Facebook had a hissy fit, restricted my account and apparently had a field day going through past things I posted that were also parody. It’s stupid, but that’s their platform and they don’t have to defend themselves to me.

Is it annoying? You bet. Particularly since no one knows how their algorithms work. I sincerely doubt that they do. But this is a part of how they moderate content.

In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting one’s self in the foot (as I may be doing now when this posts to Facebook), but if you’ve already lost your feet, how much does that matter?

Social media sites fail when they don’t explain their policies. But it gets worse.

Piling on Users.

One thing I’ve seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the ‘pile on’, where a group of people get onto a thread and overload someone’s ability to respond to one of their posts. On most networks there is some ‘slow down’ mechanism to avoid that happening, and I imagine Twitter is no different, but that likely only applies per account. Get enough accounts doing the same thing to the same person and it can get overwhelming on the technical side, and if it’s coordinated – maybe everyone has the same sort of avatar, as an example – well, that’s a problem, because it’s basically a Distributed Denial of Service on another user.

Now, this could be about all manner of stuff, but the algorithms involved don’t care how passionate people might feel about a subject. They could easily see commonalities in the ‘attack’ on a user’s post, and even on the user. A group could easily be identified as doing pile ons, and their complaints could be ‘demoted’ on the platform, essentially making it an eyeroll and, “Ahh. These people again.”
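I have no idea whether anything like this runs at Twitter or Facebook, but a crude version of that grouping heuristic is easy to sketch: count how many distinct accounts reply to the same post within a short window, and flag the post when the count spikes. The window and threshold here are invented for illustration; a real platform would presumably fold in far more signals (avatar similarity, account age, follow graphs) than reply volume alone.

```python
from collections import defaultdict

WINDOW_SECONDS = 600      # ten-minute buckets (assumed value)
PILE_ON_THRESHOLD = 50    # distinct repliers before a post gets flagged (assumed)

def find_pile_ons(reply_events):
    """reply_events: iterable of (author_id, target_post_id, unix_timestamp).

    Returns the set of post ids that drew replies from an unusually large
    number of distinct accounts within a single time window.
    """
    repliers_by_bucket = defaultdict(set)
    for author_id, post_id, timestamp in reply_events:
        bucket = (post_id, int(timestamp) // WINDOW_SECONDS)
        repliers_by_bucket[bucket].add(author_id)

    return {post_id
            for (post_id, _), authors in repliers_by_bucket.items()
            if len(authors) >= PILE_ON_THRESHOLD}
```

Nothing in that cares about the content or the passion behind it, which is the point: once a group is flagged this way, its complaints could plausibly be demoted wholesale.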

It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn’t, I would say it’s unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching for that. Are there? I have no earthly idea, but it’s something I could see easily implemented.

And if you’re someone who does it, and this happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.

Not All In The Group Are Saints.

If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality and there’s no apology to be had because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold that well.

Members of groups can break terms of service. Now, we assume that each account is looked at individually. What happens if they can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it’s always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.

Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.

We don’t know.

We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.

What should you not do on a network? Break the Terms of Service.

But when the terms of service are ambiguous, how much do they really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.

Yet, if somehow the group gets known to the platform, and enough members break Terms of Service, could they? Would they? Should they?

We don’t know. And people could be shooting themselves in the foot.

It’s Not Our Platform.

As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.

People like to talk about freedom of speech a lot, but that’s not really legitimate when you’re on someone else’s website. They can make it as close as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don’t have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flagship needed to be fact checked.

In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.

So when dealing with these social networks, bear that in mind.