Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it’s quite possible that the Facebook algorithms that have run amok with me over similar things (clear parody) may restrict my account further. I clearly marked the image as a parody.
Let’s see what happens. I imagine they’ll just toss more restrictions on me, which is why Facebook and I aren’t as close as we once were. Anyone who thinks a tractor really pulled the sunken Moskva should probably have their head examined, but that’s the issue with such algorithms left unchecked. The image quite simply is impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, some even blown up while their owners were simply trying to reap their harvests.
But this is not about that.
This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.
What The Hell Do You Know?
Honestly, not that much. As a user, I’ve steered clear of most problems with social networks simply by knowing it’s not a private place where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.
What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since the infrastructural constraints of Trinidad and Tobago (constraints that still hamper emergency communications there) kept me from building it alone, I started working on the other aspects of it, and the core issue was ‘trusted sources’.
Trusted Sources.
To understand this problem: you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But what if your mechanic specializes in your make of car, and your friend drives a BMW that spends more time in the shop than on the road? For BMW problems, that friend might be the trusted source.
You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.
In a disaster situation, the people you have on the ground are the people you have on the ground. You might be lucky enough to have doctors, nurses, EMTs and people with some experience in dealing with whatever variety of disaster is on the table, and so you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.
So the Alert Retrieval Cache I was working on was, after its instantiation, going to have to deal with these very human issues, and the best way to deal with that is with other humans. We’re kind of good at that, and it’s not something that AI is very good at, because AI is built by specialists and, beyond job skills, most people are generalists. You don’t have to be a plumber to fix a toilet, and you don’t have to be a doctor to put a bandage on someone. What’s more, people can grow beyond their pasts, despite Human Resources’ infatuation with the past.
Nobody hires you to do what you did; they hire you to do what they want to do in the future.
So even in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters, one open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot, in my opinion), identifying trusted sources is a complete crapshoot. This leads some to trust nothing, and others to trust everything.
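To make ‘fluid’ concrete, here’s a minimal sketch of what per-topic trust could look like in code. Everything here is invented for illustration (the class, the scoring, the example source names); the Alert Retrieval Cache never specified any of this.

```python
from collections import defaultdict

class TrustRegistry:
    """Hypothetical sketch: trust is scored per source *per topic*,
    and shifts as that source's reports are confirmed or contradicted."""

    def __init__(self):
        # source -> topic -> [confirmed count, contradicted count]
        self.history = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, source: str, topic: str, confirmed: bool) -> None:
        self.history[source][topic][0 if confirmed else 1] += 1

    def trust(self, source: str, topic: str) -> float:
        confirmed, contradicted = self.history[source][topic]
        total = confirmed + contradicted
        if total == 0:
            return 0.5  # unknown on this topic: neither trusted nor distrusted
        return confirmed / total

registry = TrustRegistry()
registry.record("friend_with_bmw", "bmw_repairs", confirmed=True)
print(registry.trust("friend_with_bmw", "bmw_repairs"))     # 1.0
print(registry.trust("friend_with_bmw", "throat_problems")) # 0.5: see a doctor
```

The point isn’t the arithmetic; it’s that the same source scores differently depending on the question, which is exactly why a single global ‘trusted’ flag is a crapshoot.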
Generally, if it goes with your cognitive bias, you run with it. We’re all guilty of it to some degree. The phrase, “Trust but verify” is important.
In social media networks, ‘fact checking’ became the greatest thing since giving up one’s citizenship before a public offering. So fact checking happens, and for the most part it is good – but, when applied to parody, it fails. Why? Because algorithms don’t have a sense of humor. It’s either a fact, or it’s not. And so when I posted the pictures of Ukrainian tractors towing everything, Facebook had a hissy fit, restricted my account, and apparently had a field day going through past things I had posted that were also parody. It’s stupid, but that’s their platform and they don’t have to defend themselves to me.
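A deliberately naive sketch of that failure mode, with every detail invented (no platform publishes its actual pipeline): when the only verdicts are ‘fact’ and ‘not a fact’, parody has nowhere to go.

```python
# Hypothetical, deliberately naive sketch: a binary fact-checker
# has no bucket for parody, so the parody flag never matters.
KNOWN_FALSE_CLAIMS = {"tractor towed the moskva"}

def moderate(claim: str, marked_as_parody: bool) -> str:
    if claim.lower() in KNOWN_FALSE_CLAIMS:
        # marked_as_parody is never consulted: fact, or not a fact.
        return "restrict"
    return "allow"

print(moderate("Tractor towed the Moskva", marked_as_parody=True))  # restrict
```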
Is it annoying? You bet. Particularly since no one knows how their algorithms work; I sincerely doubt that even they do. But this is a part of how they moderate content.
In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting oneself in the foot (as I may be doing now when this posts to Facebook), but if you’ve already lost your feet, how much does that matter?
Social media sites fail when they don’t explain their policies. But it gets worse.
Piling on Users.
One thing I’ve seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the ‘pile on’, where a group of people get onto a thread and overload someone’s ability to respond to one of their posts. On most networks there is some ‘slow down’ mechanism to prevent that, and I imagine Twitter is no different, but that mechanism likely only throttles one specific account. Get enough accounts doing the same thing to the same person and it can get overwhelming on the technical side, and if it’s coordinated – maybe everyone has the same sort of avatar, as an example – well, that’s a problem, because it’s basically a Distributed Denial of Service on another user.
Now, this could be about all manner of stuff, but the algorithms involved don’t care how passionately people might feel about a subject. They could easily see commonalities in the ‘attack’ on a user’s post, and even on the user. A group could easily be identified as doing pile ons, and its complaints could be ‘demoted’ on the platform, essentially reduced to an eyeroll and, “Ahh. These people again.”
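Here’s a hypothetical sketch of how a pile-on might be spotted: per-account rate limits miss a coordinated mob, so you count distinct accounts replying to one target within a time window. The threshold and window are made up, and whether Twitter does anything like this is, as I said, anyone’s guess.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300     # invented: a five-minute window
PILE_ON_THRESHOLD = 50   # invented: distinct repliers before it counts as a pile on

# target user -> deque of (timestamp, replier)
replies: defaultdict = defaultdict(deque)

def record_reply(target: str, replier: str, now: float | None = None) -> bool:
    """Record a reply and report whether the target looks piled on."""
    now = time.time() if now is None else now
    window = replies[target]
    window.append((now, replier))
    # Drop replies that have aged out of the window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct_repliers = {who for _, who in window}
    return len(distinct_repliers) >= PILE_ON_THRESHOLD
```

Each of those fifty accounts stays comfortably under its own rate limit; only the per-target view reveals the DDoS-like pattern.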
It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn’t, I would say it’s unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching for that. Are there? I have no earthly idea, but it’s something I could see easily implemented.
And if you’re someone who does it, and this happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.
Not All In The Group Are Saints.
If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality, and there’s no apology to be had, because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold as well.
Members of groups can break terms of service. Now, we assume that each account is looked at individually. What happens if accounts can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it’s always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.
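‘Loosely grouped’ could be as simple as a union-find over interactions: accounts that repeatedly reply to, boost, or defend one another get merged into one cluster, and one member’s violations color the lot. A purely speculative sketch, with invented account names:

```python
# Speculative sketch of "known associates": merge interacting accounts
# into loose clusters with union-find. Nothing here is a known platform API.
parent: dict[str, str] = {}

def find(account: str) -> str:
    parent.setdefault(account, account)
    while parent[account] != account:
        parent[account] = parent[parent[account]]  # path halving
        account = parent[account]
    return account

def associate(a: str, b: str) -> None:
    parent[find(a)] = find(b)

associate("clown_1", "clown_2")
associate("clown_2", "clown_3")
print(find("clown_1") == find("clown_3"))  # True: one loose group of associates
```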
Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.
We don’t know.
We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.
What should you not do on a network? Break the Terms of Service.
But when the Terms of Service are ambiguous, how much do we really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.
Yet if somehow the group becomes known to the platform, and enough members break the Terms of Service, could the platform act against the whole group? Would it? Should it?
We don’t know. And people could be shooting themselves in the foot.
It’s Not Our Platform.
As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.
People like to talk about freedom of speech a lot, but that’s not really legitimate when you’re on someone else’s website. They can make it as close as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don’t have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flagship needed to be fact checked.
In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.
So when dealing with these social networks, bear that in mind.
A Curmudgeon’s Guide to Social Media
I used to be heavily involved in social media; some might think I still am, when I’ve simply become more efficient and have sculpted my networks. In all, though, I rate myself a curmudgeon – ‘a bad-tempered, difficult, cantankerous person.’
This is not to say that I set out to be a curmudgeon, but I imagine that there are some people who send me things who now believe I am one. Wishing people happy birthday on social media with a click is silly. A deluge of images of politicians leaves me feeling dirty in ways a shower cannot cure; a stream of people who believe Putin masterminded everything from the Presidential Election in the U.S. to their lost sock makes me roll my eyes; watching building blocks of uninformed opinion become representative of otherwise intelligent people is the intellectual equivalent of being assaulted with gift-wrapped feces.
David over at Raptitude figured out through his experiment that he could free up more time to do things. Yet even as a curmudgeon, I have to point out that social media, social networks and the humans that use them are a part of our lives. We just don’t need to exist on their plane; they need to exist on ours.
What that means is that we should understand it’s typically not very important, and we should be OK with telling people not to send us crap on WhatsApp, Facebook Messenger, Twitter, Instagram, and whatever crackpost (that was a typo, but I like it) network people use as an echo chamber to feel good about themselves.
We shouldn’t have to think of ourselves as curmudgeons to do this. We can control what we take in simply by telling people what we don’t want to spend our time on – be it the stale joke networks on WhatsApp, the in-depth discussions on doomed men’s fashion, the cute puppy videos, or the insane number of posts about adopting animals. In my spare time, I don’t want that to be what defines me.
No, I’d rather define myself than be molded into an acceptable image of what society likes. We are society.
Language And Tech (2014)
It’s official, for better or worse: ‘Tweet’ is now recognized in the Oxford dictionary despite breaking at least one OED rule: It’s not 10 years old yet.
‘Big Data’ also made it in, as did ‘crowdsourcing’, ‘e-reader’, ‘mouseover’ and ‘redirect’ (new context). There’s a better writeup in the June 2013 update of the Oxford English Dictionary (OED) that also dates the use of the phrase “don’t have a cow, man” back to 1959 – to the chagrin of Bart’s fans everywhere, I’m sure.
As a sidenote, those who use Twitter are discouraged from being twits, and ‘sega’ is actually a dance from the Mascarene Islands.
It’s always interesting to watch how language evolves, and sometimes it’s a little disturbing. I honestly don’t know how I should feel about ‘tweet’ making it in when the brand ‘Twitter’ is based on the word ‘twit’ (see above link), but hey. Oxford says it’s OK, and twits and tweeters everywhere can now rejoice.
Image courtesy Nancy L. Stockdale and made available through this Creative Commons License.
HowTo: Twitter Header Size 2016
In the past, for Twitter, I just tossed one of my many beach sunrise photos at it and let it do its thing. This time, I didn’t.
Yesterday I made the image on the right. There was a meme going around, and I had this picture of a North American Osprey in the foreground of a storm, and I thought: why not.
Then I was looking at my Twitter page and thought, “That might be a good header”. So I tried changing it and, lo, it was too big and didn’t resize right. So – I had the wrong dimensions, and through a search I found out that the Twitter header size is supposed to be 1500 x 500 pixels. I resized the image accordingly.
Same problem.
I tried Chrome and Firefox (I typically use Seamonkey). Same problem.
I did some more digging and tried a few different sizes. Still, no. Same problem. I tried searching for things like “Twitter header too big” and came up with the same awful pages that hadn’t helped me in the first place. Some offered to resize it for me, but tada – same problem.
I went from searching for the right answer to hacking my own.
It took me about 15 minutes (as long as it took to write this) to come up with the solution.
If you’re having a similar problem, try changing the canvas size so that you have about 100 pixels of padding above and below the image (add 200 pixels or so to the canvas height, centered, and there you go). Fill the padding with a similar color, just in case.
Try it out. Tweak it if necessary. You’re done.
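If you’d rather script the fix than fight an image editor, here’s a minimal sketch using Python and Pillow; the filenames and the fill color are placeholders for your own.

```python
from PIL import Image

PAD = 100  # extra pixels above and below the 1500 x 500 image

src = Image.open("header_1500x500.png")  # placeholder filename
# New canvas, taller by 2 * PAD, filled with a color close to the image's edges.
canvas = Image.new("RGB", (src.width, src.height + 2 * PAD), (20, 40, 60))
canvas.paste(src, (0, PAD))  # center the original vertically
canvas.save("header_padded.png")  # upload this one
```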
And a sidenote to Twitter – did you actually think about how screwy this is?