Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it's quite possible that the Facebook algorithms that have already run amok with me over similar things (clear parody) may further restrict my account. I clearly marked the image as a parody.
Let's see what happens. I imagine they'll just toss more restrictions on me, which is why Facebook and I aren't as close as we once were. Anyone who thinks a tractor pulling the sunk Moskva really happened should probably have their head examined, but that is the issue with such algorithms left unchecked. The image is quite simply impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, some even blown up while their owners were simply trying to reap their harvests.
But this is not about that.
This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.
What The Hell Do You Know?
Honestly, not that much. As a user, I've steered clear of most problems with social networks simply by knowing they're not private places where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.
What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since I couldn't work on it by myself because of the infrastructural constraints of Trinidad and Tobago (constraints that still hamper emergency communications there), I started working on the other aspects of it, and the core issue was 'trusted sources'.
Trusted Sources.
To understand this problem: you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But what if your mechanic specializes in your car, while your friend has a BMW that spends more time in the shop than on the road? That friend, by now, might be a trusted source on BMWs.
You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.
In a disaster situation, you have the people you have on the ground. You might be lucky enough to have doctors, nurses, EMTs and people with some experience in dealing with a disaster of whatever variety is on the table, and so you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.
So the Alert Retrieval Cache I was working on, after its instantiation, was going to have to deal with these very human issues, and the best way to deal with that is with other humans. We're kind of good at that, and it's not something that AI is very good at, because AI is built by specialists and, beyond job skills, most people are generalists. You don't have to be a plumber to fix a toilet, and you don't have to be a doctor to put a bandage on someone. What's more, people can grow beyond their pasts, despite Human Resources' infatuation with the past.
Nobody hires you to do what you did, they hire you to do what they want to do in the future.
So even in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters, one open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot, in my opinion), finding trusted sources is a complete crapshoot. This leads some to trust nothing, and others to trust everything.
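To make 'fluid' a bit more concrete, here's a minimal sketch of what a trust score for on-the-ground sources might look like. The Source class, the confirmation nudges, and the numbers are all hypothetical – this is not how the Alert Retrieval Cache was actually built.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A person or account reporting from the ground (hypothetical model)."""
    name: str
    trust: float = 0.5          # start neutral; nobody is fully trusted or distrusted
    history: list = field(default_factory=list)

    def record(self, confirmed: bool, weight: float = 0.1) -> None:
        """Nudge trust up when a report is confirmed, down when it is refuted."""
        self.history.append(confirmed)
        delta = weight if confirmed else -weight
        self.trust = min(1.0, max(0.0, self.trust + delta))

def rank_reports(reports: list[tuple[Source, str]]) -> list[tuple[Source, str]]:
    """Order incoming reports so the most trusted sources surface first."""
    return sorted(reports, key=lambda pair: pair[0].trust, reverse=True)

# A nurse whose reports keep checking out floats upward; an unknown account stays neutral.
nurse = Source("on-site nurse")
unknown = Source("random account")
nurse.record(confirmed=True)
nurse.record(confirmed=True)
for source, text in rank_reports([(unknown, "bridge is out"), (nurse, "clinic needs water")]):
    print(f"{source.trust:.2f} {source.name}: {text}")
```

The point of the sketch is only that trust has to be earned, lost, and re-earned over time by humans confirming things – exactly the part machines are worst at.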
Generally, if it goes with your cognitive bias, you run with it. We're all guilty of that to some degree. The phrase "trust but verify" is important.
In social media networks, 'fact checking' became the greatest thing since giving up one's citizenship before a public offering. So fact checking happens, and for the most part it's good – but when applied to parody, it fails. Why? Because algorithms don't have a sense of humor. It's either a fact, or it's not. And so when I posted the pictures of Ukrainian tractors towing everything, Facebook had a hissy fit, restricted my account, and apparently had a field day going through past things I had posted that were also parody. It's stupid, but that's their platform and they don't have to defend themselves to me.
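As a caricature of the problem, here's what a purely binary check looks like when there's no bucket for satire. The labels and the binary_fact_check function are made up for illustration, not anyone's real fact-checking pipeline.

```python
from enum import Enum

class Verdict(Enum):
    FACT = "fact"
    FALSE = "false"          # no third bucket for satire or parody

def binary_fact_check(claim: str, known_facts: set[str]) -> Verdict:
    """A caricature of a fact checker with no sense of humor:
    anything not in the record of known facts is simply 'false'."""
    return Verdict.FACT if claim in known_facts else Verdict.FALSE

known_facts = {"The Moskva sank in April 2022."}
print(binary_fact_check("A tractor towed the sunk Moskva.", known_facts))
# -> Verdict.FALSE, and the parody gets flagged exactly like disinformation would.
```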
Is it annoying? You bet. Particularly since no one outside these companies knows how their algorithms work – and I sincerely doubt that even they do. But this is a part of how they moderate content.
In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting oneself in the foot (as I may be doing now when this posts to Facebook), but if you've already lost your feet, how much does that matter?
Social media sites fail when they don’t explain their policies. But it gets worse.
Piling on Users.
One thing I've seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the 'pile on', where a group of people get onto a thread and overload someone's ability to respond to one of their posts. On most networks there is some 'slow down' mechanism to keep that from happening, and I imagine Twitter is no different, but that likely only applies to one specific account. Get enough accounts doing the same thing to the same person and it can get overwhelming on the technical side, and if it's coordinated – maybe everyone has the same sort of avatar, as an example – well, that's a problem, because it's basically a Distributed Denial of Service on another user.
Now, this could be about all manner of stuff, but the algorithms involved don't care how passionate people might feel about a subject. They could easily see commonalities in the 'attack' on a user's post, and even on the user. A group could easily be identified as doing pile ons, and their complaints could be 'demoted' on the platform, essentially making it an eyeroll and, "Ahh. These people again."
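A crude version of that kind of detection might look like the sketch below. The thresholds, the avatar-similarity shortcut, and the Reply structure are all guesses for the sake of illustration, not how Twitter or anyone else actually does it.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Reply:
    author_id: str
    avatar_hash: str     # stand-in for whatever profile signal a platform might use
    minute: int          # minutes since the original post

def looks_like_pile_on(replies: list[Reply],
                       window_minutes: int = 60,
                       min_accounts: int = 50,
                       similarity_threshold: float = 0.4) -> bool:
    """Flag a thread when many distinct accounts swarm one post quickly,
    especially if they share an obvious marker like the same avatar."""
    recent = [r for r in replies if r.minute <= window_minutes]
    accounts = {r.author_id for r in recent}
    if len(accounts) < min_accounts:
        return False
    # How dominant is the single most common avatar among the recent repliers?
    most_common = Counter(r.avatar_hash for r in recent).most_common(1)[0][1]
    return most_common / len(recent) >= similarity_threshold
```

Once a heuristic like that also remembers which accounts keep tripping it, demoting their future complaints is a one-line policy decision – which is how a passionate group can quietly end up as "Ahh. These people again."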
It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn't, I would say it's unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching for that. Are there? I have no earthly idea, but it's something I could see easily implemented.
And if you're someone who does this, and it happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.
Not All In The Group Are Saints.
If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality and there’s no apology to be had because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold that well.
Members of groups can break terms of service. Now, we assume that each account is looked at individually. What happens if they can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it's always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.
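As for 'we have the technology for that': one simple, entirely hypothetical way to loosely group accounts is to treat repeated interactions as edges in a graph and take the connected components. Nothing below reflects what any platform actually runs.

```python
from collections import defaultdict

# Hypothetical interaction log: pairs of accounts that reply to or amplify each other.
interactions = [("a1", "a2"), ("a2", "a3"), ("a4", "a5"), ("a3", "a1")]

def loose_groups(pairs: list[tuple[str, str]]) -> list[set[str]]:
    """Cluster accounts into 'known associates' by walking the interaction graph."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(graph[node] - group)
        seen |= group
        groups.append(group)
    return groups

print(loose_groups(interactions))   # e.g. [{'a1', 'a2', 'a3'}, {'a4', 'a5'}]
```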
Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.
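If I had to guess at what 'going with what the algorithms say' looks like, it would be a triage along these lines: act automatically when a classifier is confident, queue the rest for whoever is left in the building. The thresholds and the Report shape are inventions for the sake of the sketch.

```python
from dataclasses import dataclass

AUTO_REMOVE = 0.95   # hypothetical thresholds; real platforms publish nothing like this
AUTO_IGNORE = 0.10

@dataclass
class Report:
    post_id: str
    model_score: float   # classifier's confidence that the post violates the ToS

def triage(report: Report) -> str:
    """Route a complaint: act automatically at the extremes, send the rest to humans."""
    if report.model_score >= AUTO_REMOVE:
        return "remove automatically"
    if report.model_score <= AUTO_IGNORE:
        return "dismiss automatically"
    return "queue for human review"

for r in (Report("p1", 0.97), Report("p2", 0.05), Report("p3", 0.60)):
    print(r.post_id, "->", triage(r))
```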
We don’t know.
We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.
What should you not do on a network? Break the Terms of Service.
But when the terms of service are ambiguous, how much do they really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.
Yet if the group somehow becomes known to the platform, and enough of its members break the Terms of Service, could the platform act against the group as a whole? Would it? Should it?
We don’t know. And people could be shooting themselves in the foot.
It’s Not Our Platform.
As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.
People like to talk about freedom of speech a lot, but that's not really legitimate when you're on someone else's website. They can make it as close as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don't have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flagship needed to be fact checked.
In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.
So when dealing with these social networks, bear that in mind.