Yesterday, I went on a bit of a spree on RealityFragments.com, with the results fairly summarized on the RealityFragments About Page. The reason for the spree was pretty simple.
There are some issues with design.
Some of it is implicit in WordPress.com. To ‘like’ or comment on content, you need a WordPress.com account. It’s painful for non-WordPress.com users to do that when they’re used to logging into everything automagically – and it’s also necessary to avoid spam comments that link to websites that sell everything from ‘getting rich quick’ schemes to promises of increasing the prominence of one’s nether regions. It’s a hard balance.
And it’s kinda crappy design because we, collectively, haven’t figured out a better way to handle spammers. I could get into the failures of nations to work together on this, but if we go down that path we will be in the weeds for a very, very long time.
Suffice it to say my concern is that of the readers. The users. And it brought to mind that yellow book by Donald A. Norman, the very color of the book being an example of good design. After all, that’s how I remember it.
“Design is really an act of communication, which means having a deep understanding of the person with whom the designer is communicating.”
This is where we who have spent time in the code caves get things wrong. Software Engineers are generally rational beings who expect everyone to be rational, and if we just got rid of irrational users “we would have a lot less problems!”.
I’ve spent about half a century on the planet at this point, and I will make a statement: By default, humans are irrational, and even those of us who consider ourselves rational are irrational in ways we… rationalize. Sooner or later, everyone comes to terms with this or dies very, very frustrated.
The problem I had is that I wasn’t getting feedback. The users can’t give it without giving WordPress.com the emotional equivalent of their first born child, apparently. Things have gotten faster and we want things more now-er. We all do. We want that instant gratification.
In the context of leaving a comment, if there are too many bells and whistles associated with doing it, the person forgets what they were going to comment about in the first place.
“The idea that a person is at fault when something goes wrong is deeply entrenched in society. That’s why we blame others and even ourselves… More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised… But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature…. Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.”
The thing is – there is no good solution for this. None, whatsoever, mainly because the alternative that was already there had not occurred to the users. It’s posted on Facebook, on the RealityFragments page, where I mix content from here and RealityFragments. The posts can be easily interacted with on Facebook for those who use Facebook. Sure, it doesn’t show on the website, but that doesn’t matter as much to me as the interaction itself.
Factor in that it’s easy for my posts to get buried by Facebook algorithms, and it becomes an issue as well.
Thus, I created the RealityFragments Group on Facebook. People join, they can wander into the group and discuss stuff asynchronously, instead of the doom scroll of content people are subjected to. My intention is for my content not to compete for attention in that way, because it simply can’t.
I don’t have images of models trying on ideas. I don’t have loads of kitten pictures, and I’m certainly not getting dressed up and doing duck lips to try to convince people to read and interact with what I create. I am also, for the record, not willing to wear a bikini. You’re welcome.
This was a less than ideal solution to the problem. Maybe.
Time will tell if I got it right, but many more technically minded people will say, “You could just manage your own content management system on a rented server.” This is absolutely true.
What’s also true is that I would then be on the hook for everything, and when a content management system needs love, it wants it now. Thus when I’m ready to start writing, I suddenly have to deal with administration issues, and before you know it, I’ve forgotten what I wanted to write – just like the users who have to create an account on WordPress.com to comment or like. A mirror.
So this is a compromised solution. Maybe. Time will tell.
And if you want to interact with this post and can’t log in to WordPress, feel free to join the RealityFragments.com Facebook group. Despite its name, it’s also for KnowProSE.com.
Once again when I logged into Facebook I saw someone posting, “If you get a friend request from me I’ve been hacked! It’s not me!”, or something to that effect.
No, you haven’t been hacked. You’ve been mimicked.
Hacking, which has already deviated from its original meaning through popular usage, would be the equivalent of breaking and entering and stealing your information.
There’s no direct value to actually getting into someone’s Facebook account unless you have some credit card tied to it, or use Facebook credentials to log in to other sites – which you should not do.
Facebook is not known for being a privacy company. It didn’t start off as one, it hasn’t been one, and while it’s increasingly gotten better at privacy for users, users have to be aware of how this all works.
For that fake Facebook account, they likely took some of your publicly available photos. That’s copyright infringement, but it’s something generally glossed over because it costs more to litigate than it’s worth – which gets into copyright dilution, and me saying I’m not a lawyer. If you really have concerns, you should talk to a lawyer who specializes in copyright. Not me.
And.
This is where you should start thinking about what you share, and how you share it. This is not a new problem, but it could be a new problem for you. It’s even caused a new ‘word’ of sorts for parents who share pictures of their pride and joy. It’s sharenting, and parents on Facebook should read the link in this sentence.
Then they use this fake account to gain legitimacy by getting a few of your real connections as new connections. That’s when your real friends should report the fake account, because they know it’s not you – and you don’t need to put out a message about ‘your account being hacked’.
Now, if you were truly hacked, someone would have access to your account, they would have changed the password, and you wouldn’t be able to post that your account was hacked!
It’s really that simple. So, be careful what you share on Facebook, be careful what and where you drop comments, and keep your hands and feet inside the moving vehicle. If this at any time seems too complicated, maybe you simply shouldn’t use Facebook. It’s one big echo chamber anyway.
Moderation of content has become a bit ridiculous on social media sites of late. Given that this post will show up on Facebook, and the image at top will be shown, it’s quite possible that the Facebook algorithms – which have run amok with me over similar things that were clear parody – may further restrict my account. I clearly marked the image as a parody.
Let’s see what happens. I imagine they’ll just toss more restrictions on me, which is why Facebook and I aren’t as close as we once were. Anyone who thinks a tractor pulling the sunk Moskva really happened should probably have their head examined, but this is the issue of such algorithms left unchecked. It quite simply is impossible, implausible, and… yes, funny, because Ukrainian tractors have invariably been the heroes of the conflict, even having been blown up when their owners were simply trying to reap their harvests.
But this is not about that.
This is about understanding how social media moderation works, and doesn’t, and why it does, and doesn’t.
What The Hell Do You Know?
Honestly, not that much. As a user, I’ve steered clear of most problems with social networks simply by knowing it’s not a private place where I can do as I please – and even where I can, I have rules of conduct I live by that are generally compatible with the laws of society.
What I do know is that when I was working on the Alert Retrieval Cache way back when, before Twitter, the problem I saw with this disaster communication software was the potential for bad information. Since I couldn’t work on it by myself because of the infrastructural constraints of Trinidad and Tobago (which still defies them for emergency communications), I started working on the other aspects of it, and the core issue was ‘trusted sources’.
Trusted Sources.
To understand this problem, you go to a mechanic for car problems, you go to a doctor for medical problems, and so on. Your mechanic is a trusted source for your car (you would hope). But if your friend has a BMW that spends more time in the shop than on the road, your friend might be a trusted source on BMWs, too.
You don’t see a proctologist when you have a problem with your throat, though maybe some people should. And this is where the General Practitioner comes in to basically give you directions on what specialist you should see. With a flourish of a pen in alien handwriting, you are sent off to a trusted source related to your medical issue – we hope.
In a disaster situation, you have the people you have on the ground. You might be lucky enough to have doctors, nurses, EMTs and people with some experience in dealing with a disaster of whatever variety that’s on the table, and so you have to do the best with what you have. For information, some sources will be better than others. For getting things done, again, it depends a lot on the person on the ground.
So the Alert Retrieval Cache I was working on after its instantiation was going to have to deal with these very human issues, and the best way to deal with that is with other humans. We’re kind of good at that, and it’s not something that AI is very good at, because AI is built by specialists and, beyond job skills, most people are generalists. You don’t have to be a plumber to fix a toilet, and you don’t have to be a doctor to put a bandage on someone. What’s more, people can grow beyond their pasts despite an infatuation in Human Resources with the past.
Nobody hires you to do what you did, they hire you to do what they want to do in the future.
So just in a disaster scenario, trusted sources are fluid. In an open system not confined to disasters – open to all manner of cute animal pictures, wars, protests, and even politicians (the worst of the lot in my opinion) – identifying trusted sources is a complete crapshoot. This leads everyone to trust nothing, or some to trust everything.
Generally, if it goes with your cognitive bias, you run with it. We’re all guilty of it to some degree. The phrase, “Trust but verify” is important.
In social media networks, ‘fact checking’ became the greatest thing since giving up one’s citizenship before a public offering. So fact checking happens, and for the most part is good – but, when applied to parody, it fails. Why? Because algorithms don’t have a sense of humor. It’s either a fact, or it’s not. And so when I posted the pictures of Ukrainian tractors towing everything, Facebook had a hissy fit, restricted my account and apparently had a field day going through past things I posted that were also parody. It’s stupid, but that’s their platform and they don’t have to defend themselves to me.
Is it annoying? You bet. Particularly since no one knows how their algorithms work. I sincerely doubt that they do. But this is a part of how they moderate content.
In protest, does it make sense to post even more of the same sort of content? Of course not. That would be shooting one’s self in the foot (as I may be doing now when this posts to Facebook), but if you’ve already lost your feet, how much does that matter?
Social media sites fail when they don’t explain their policies. But it gets worse.
Piling on Users.
One thing I’ve seen on Twitter that has me shaking my head, as I mentioned in the more human side of Advocacy and Social Networks, is the ‘Pile On’, where a group of people can get onto a thread and overload someone’s ability to respond to one of their posts. On most networks there is some ‘slow down’ mechanism to avoid that happening, and I imagine Twitter is no different, but that might be just from one specific account. Get enough accounts doing the same thing to the same person, it can get overwhelming from the technical side, and if it’s coordinated – maybe everyone has the same sort of avatar as an example – well, that’s a problem because it’s basically a Distributed Denial of Service on another user.
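That ‘slow down’ mechanism can be sketched in code. What follows is a minimal, hypothetical sliding-window limiter – the class name, the limit, and the window are my own assumptions for illustration, not any platform’s actual implementation:

```python
import time
from collections import defaultdict, deque

class SlowDown:
    """Hypothetical per-account limiter: at most `limit` replies to the
    same thread within `window` seconds (a sliding window)."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        # (account, thread) -> timestamps of recent replies
        self.events = defaultdict(deque)

    def allow(self, account, thread, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[(account, thread)]
        # drop timestamps that have fallen out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttle this account on this thread
        q.append(now)
        return True
```

Note what this sketch can’t do: it throttles one hot account on one thread, but a hundred accounts each posting once sail straight through – which is exactly why a coordinated pile-on looks, to this kind of limiter, like ordinary traffic.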
Now, this could be about all manner of stuff, but the algorithms involved don’t care about how passionate people might feel about a subject. They could easily see commonalities in the ‘attack’ on a user’s post, and even on the user. A group could easily be identified as doing pile-ons, and their complaints could be ‘demoted’ on the platform, essentially making it an eyeroll and, “Ahh. These people again.”
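How might a platform spot those commonalities? Here’s an entirely hypothetical sketch – the signal (a shared avatar hash), the cluster size, and the time window are all my assumptions; no platform’s real signals are public:

```python
from collections import defaultdict

def find_pile_on_clusters(replies, min_size=10, window=300):
    """Group replies to one post by a shared trait (here: avatar hash)
    and flag any group where `min_size` replies land inside a `window`
    of seconds - dense enough to look coordinated rather than organic.
    `replies` is a list of (account, avatar_hash, timestamp) tuples."""
    by_trait = defaultdict(list)
    for account, avatar, ts in replies:
        by_trait[avatar].append((ts, account))

    flagged = []
    for avatar, items in by_trait.items():
        items.sort()  # by timestamp
        # slide over timestamps: any min_size replies inside `window`?
        for i in range(len(items) - min_size + 1):
            if items[i + min_size - 1][0] - items[i][0] <= window:
                flagged.append({account for _, account in items})
                break
    return flagged
```

The point of the sketch is the eyeroll: once a group trips a detector like this, everything its members post can be demoted wholesale – with no regard for whether any individual complaint had merit.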
It has nothing to do with the content. Should it? I would think it should, but then I would want them to agree with my perspective, because if they didn’t, I would say it’s unfair. As Lessig wrote, Code is Law. So there could well be algorithms watching that. Are there? I have no earthly idea, but it’s something I could see easily implemented.
And if you’re someone who does it, and this happens? It could well cause problems for the very users trying to advocate a position. Traffic lights can be a real pain.
Not All In The Group Are Saints.
If we assume that everyone in our group can do no wrong, we’re idiots. As groups grow larger, the likelihood of getting something wrong increases. As groups grow larger, there’s increased delineation from other groups, there’s a mob mentality and there’s no apology to be had because there’s no real structure to many of these collective groups. When Howard Rheingold wrote about Smart Mobs, I waited for him to write about “Absolutely Stupid Mobs”, but I imagine that book would not have sold that well.
Members of groups can break terms of service. Now, we assume that the account is looked at individually. What happens if they can be loosely grouped? We have the technology for that. Known associates, etc., etc. You might be going through your Twitter home page and find someone you know being attacked by a mob of angry clowns – it’s always angry clowns, no matter how they dress – and jump in, passionately supporting someone who may well have caused the entire situation.
Meanwhile, Twitter, Facebook, all of them simply don’t have the number of people to handle what must be a very large flaming bag of complaints on their doorstep every few microseconds. Overwhelmed, they may just go with what the algorithms say and call it a night so that they can go home before the people in the clown cars create traffic.
We don’t know.
We have Terms of Service for guidelines, but we really don’t know the algorithms these social media sites run to check things out. It has to be at least a hybrid system, if not almost completely automated. I know people on Twitter who are on their third accounts. I just unfollowed one today because I didn’t enjoy the microsecond updates on how much fun they were having jerking the chains of some group that I won’t get into. Why is it their third account? They broke the Terms of Service.
What should you not do on a network? Break the Terms of Service.
But when the terms of service are ambiguous, how much do they really know? What constitutes an ‘offensive’ video? An ‘offensive’ image? An ‘offensive’ word? Dave Chappelle could wax poetic about it, I’m sure, as could Ricky Gervais, but they are comedians – people who show us the humor in an ugly world, when permitted.
Yet, if somehow the group gets known to the platform, and enough members break Terms of Service, could they? Would they? Should they?
We don’t know. And people could be shooting themselves in the foot.
It’s Not Our Platform.
As someone who has developed platforms – not the massive social media platforms we have now, but I’ve done a thing or two here and there – I know that behind the scenes things can get hectic. Bad algorithms happen. Good algorithms can have bad consequences. Bad algorithms can have good consequences. Meanwhile, these larger platforms have stock prices to worry about, shareholders to impress, and if they screw up some things, well, shucks, there’s plenty of people on the platform.
People like to talk about freedom of speech a lot, but that’s not really legitimate when you’re on someone else’s website. They can make it as close as they can, following the rules and laws of many nations or those of a few, but really, underneath it all, their algorithms can cause issues for anyone. They don’t have to explain to you why the picture of your stepmother with her middle finger up was offensive, or why a tractor towing a Russian flagship needed to be fact checked.
In the end, there’s hopefully a person at the end of the algorithm who could be having a bad day, or could just suck at their job, or could even just not like you because of your picture and name. We. Don’t. Know.
So when dealing with these social networks, bear that in mind.
Just as Facebook is recovering from the privacy concerns related to Cambridge Analytica, including the threat of a lawsuit from one UK group, it is now even more interested in your data.
…Facebook already has smaller agreements with financial institutions, including PayPal and American Express, that allow users to do things such as review transaction receipts on Facebook Messenger. In March, Facebook launched a service that would allow Citibank customers in Singapore to ask a Messenger chatbot for their account balance, their recent transactions and credit card rewards.
It’s a strange world we live in where we trust those that have not been trustworthy in the past. ‘To err is human, to forgive is divine.’
Are you divine? I’m not. I’m sure, though, that connecting the accounts will require buy-in from consumers.
It’s a good article, and it shows how much data people give up freely – who doesn’t have a Gmail account or a Facebook page these days? – but it’s lacking something that most people miss, largely because they’re thinking of their own privacy or lack of it.
I requested my data from the sites – Facebook had 384 megabytes on me, and my Google Data I will get on April 7th since I opted for 50 gigabytes. All this data, though, is limited to what I have done.
It lacks the context. We are all single trees in the forest, and these companies aren’t so much in the habit of studying trees by themselves. They have the data of the forest of trees. That context, those interactions, you can’t really download. The algorithms they have derive data from what we hand over so willingly because it costs us nothing financially.
So, while they can give us our data, and some companies do, they can’t give us someone else’s data – so we only get the data on that single tree, ourselves. We learn only a small amount of what their algorithms have decided about us, and while Facebook has a way to see some of what their algorithms have decided about you, they are not compelled to tell you everything about your digital shadow. Your digital shadow has no rights, yet is used to judge you.
What’s your context? That’s the real question. It’s what they don’t show you, what they have decided about you from your habits, that they don’t truly share. That is, after all, their business.
Know that, be conscious of it… and don’t be an idiot online, no matter how smart you think you are. Everything you do is analyzed by an algorithm.
I used to be heavily involved in social media; some might think I still am when I’ve simply become more efficient and have sculpted my networks. In all, though, I rate myself a curmudgeon – a ‘bad-tempered, difficult, cantankerous person.’
This is not to say that I am a curmudgeon, but I imagine that there are some people who send me things who now believe I am a curmudgeon. Wishing people happy birthday on social media with a click is silly. A deluge of images of politicians leaves me feeling dirty in ways a shower cannot cure; a stream of people who believe Putin masterminded everything from the Presidential Election in the U.S. to their lost sock makes me roll my eyes; watching building blocks of uninformed opinion become representative of otherwise intelligent people is the intellectual equivalent of being assaulted with gift-wrapped feces.
What that means is we should understand that it’s typically not very important, and we should be OK with telling people not to send us crap on WhatsApp, Facebook Messenger, Twitter, Instagram, and whatever crackpost (that was a typo but I like it) network that people use as echo chambers to feel good about themselves.
We shouldn’t have to think of ourselves as curmudgeons to do this. We can control what we take in simply by telling people what we don’t want to spend our time on – be it the stale joke networks on WhatsApp, the in-depth discussion on doomed men’s fashion, the cute puppy videos, or the insane amount of posts about adopting animals. In my spare time, I don’t want that to be what defines me.
No, I’d rather define myself than be molded into an acceptable image of what society likes. We are society.