The Reading Problem.

We've all encountered it. We post an article on some social network and someone comments without having read the article, or without reading it properly.

As someone who writes, I went through the stages of grief over it, and I can apathetically report that I don't care as much as I used to. Many people skim headlines, share them without thought, and then blame the Russians or whoever the headline targets for everything.

As someone who reads, I’m confounded by it. When I read that skim reading is the new reading, some of it began to make sense:

…As work in neurosciences indicates, the acquisition of literacy necessitated a new circuit in our species’ brain more than 6,000 years ago. That circuit evolved from a very simple mechanism for decoding basic information, like the number of goats in one’s herd, to the present, highly elaborated reading brain. My research depicts how the present reading brain enables the development of some of our most important intellectual and affective processes: internalized knowledge, analogical reasoning, and inference; perspective-taking and empathy; critical analysis and the generation of insight. Research surfacing in many parts of the world now cautions that each of these essential “deep reading” processes may be under threat as we move into digital-based modes of reading… — Maryanne Wolf, “Skim reading is the new normal”, The Guardian

The bad news is that anyone who actually read that quote didn't skim it, and therefore isn't the person who most needs to understand it on a personal level. The good news is that there are people thinking about it.

But there are other things, things that also need to be addressed. Some people don't even skim articles; they skim headlines – and in a rush, for whatever reason, they share them. Before you know it, things with no actual truth to them, or just enough to be shared, inundate the entire web.

Issues of framing – how technology frames what we see – come into play as well.

And what it really boils down to is this: however much we might like to think that the people who are demonstrably susceptible to all of this are simply ignorant, as a society we generate a lot of things to read. Publishers understand the need for sticky headlines and 'cover art', and they are good at it.

People don't have enough time to deep read things, and they don't want to be left out of an accelerating world – yet they're proud of themselves when they can type out the four letters 'TLDR'.

People who figured all of this out long ago have capitalized on it. Fake news, coupled with big-data analysis of what people are interested in, drives an impressive amount of sharing of information that should probably be tossed onto a pyre of literacy.

So, what to do as a writer? Well, the answer to that is simple: Keep writing.

And, as a global citizen on the Internet? Deep read. Don't skim. Encourage others to do the same.

 


Information Fiefdoms

Yesterday, I found myself standing in Nigel Khan's bookstore in Southpark, looking at what I consider old books.

I have a habit when I look at books, something I picked up in Trinidad some years ago, after the Internet became more than a novelty: I check the date a book was published. It keeps me from buying antiques, though I have also been known to buy books in thrift shops abroad (I am very picky).

I found myself looking at Tim Wu's 'The Master Switch: The Rise and Fall of Information Empires'. Given some of the things I'd been talking about in different circles, it interested me – and I knew Tim Wu from his work on Network Neutrality. I checked the publication date.

November, 2010.
It’s August, 2018.

Eight years. That's 5.33 doublings of Moore's Law, which is an unfair measure since it isn't a technology book – but it's an indicator. Things change quickly. Information empires rise and fall in less time these days. In one of the groups I participate in, someone was celebrating integrating something with OneNote, thinking he'd finally gotten things on track – when, in fact, it's just a snapshot, more subject to Moore's Law than anyone cares to admit, except for the people who want to sell you more hardware and more software. They've evolved to the subscription model to make their financial flow rates more consistent, while you, dear subscriber, don't actually own anything you subscribe to.
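For what it's worth, the 5.33 figure is just a back-of-the-envelope doubling count – a quick sketch, assuming the commonly quoted 18-month doubling period:

```python
# Back-of-the-envelope: doublings under Moore's Law over 8 years,
# assuming (my assumption, not the book's) an 18-month doubling period.
years = 2018 - 2010
doubling_period_years = 1.5

doublings = years / doubling_period_years
growth_factor = 2 ** doublings

print(f"{doublings:.2f} doublings")               # 5.33
print(f"~{growth_factor:.0f}x transistor count")  # ~40x
```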

You’re building a house with everything on loan from the hardware store. When your subscription is up, the house disappears.

Information empires indeed. Your information may be your own, but how you get to it is controlled by someone who might not be there tomorrow.

We tend to think of information in very limited ways when we are in fact surrounded by it. We are information. From our DNA to our fingerprints, from our ears to our hair follicles – we are information, information that moves around and interacts with other information. We still haven’t figured out our brains, a depressing fact since it seems a few of us have them, but there we have it.

Information empires. What separates data from information is only really one thing – being used. Data sits there; it’s a scalar. Information is a vector – and really, information has more than one vector. Your mother is only a mother to you – she might be an aunt to someone else, a boss to someone else, an employee to someone else, and a daughter to your grandmother. Information allows context, and there’s more than one context.
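A small sketch of that idea, with names invented purely for illustration – the same datum means something different in each context it's read through:

```python
# Illustrative only: one datum, many vectors of meaning depending on context.
person = "Anna"  # the datum itself: a scalar, just sitting there

# Information is the datum plus a context: who is looking, and from where.
contexts = {
    "you": "mother",
    "your cousin": "aunt",
    "her employee": "boss",
    "her manager": "employee",
    "your grandmother": "daughter",
}

for viewer, relationship in contexts.items():
    print(f"To {viewer}, {person} is a {relationship}.")
```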

If you’re fortunate, you see at least one tree a day. That tree says a lot, and you may not know it. Some trees need a lot of water, some don’t. Some require rich soil, some don’t. Simply by existing, it tells us about the environment it is in. Information surrounds us.

Yet we tend to think of information in the context of libraries, or of database tables. And we tend to look at information empires as defined by copyright, by access (Net Neutrality, the digital divide, et al.), or simply by incompatible technologies. They come and go, increasingly not entering the public domain, increasingly lost – perhaps sometimes for good.

And if you go outside right now and stand, breathing the air, feeling the wind, watching the foliage shift left and right, you are awash in information that you take for granted – an empire older than we are, information going between plants through fungus.

There are truly no information empires in humanity other than those that are protected by laws. These are fiefdoms, gatekeepers to information.

The information empire – there is only one – surrounds us.

Technology and Mediation

As I mentioned before, I recently took a level 1 mediation course and in doing that, I began looking at many things through a new lens. It's a process, and since it's my life, much of what I've looked at relates to technology.

Looking through such a lens, though, reveals a mess.

Nature, Tech and Mediation

When we think of technology these days, we tend to think of Internet-related technologies – technologies that have run through our lives like fire: seemingly unstoppable, beyond our individual ability to control them or how they impact us. This is because fire, like the wheel and other technologies of that kind, is based on natural laws. There is no controlling natural laws; there is only an understanding of them, and the use of that understanding.

For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.
– Richard P. Feynman, Appendix F of the Rogers Commission Report (on the Challenger disaster).

With Internet technologies, though, it’s not so much about nature because, while the platform is derived from natural laws, what is used on them is defined by human minds. By code, and what that code works on: our content.

The Code

Why does code work the way it does? It's typically the consensus of the group involved in writing it, and that varies. The Open Source and Free Software communities have a meritocratic structure, while proprietary approaches tend toward a more corporate one. The 'object oriented' approach means code gets re-used, which means it gets plugged into applications it may not originally have been designed for – simply because it works for the criteria of the project.

Just because something works for the criteria of a project, though, doesn’t mean it’s the best fit – something I’ve seen all too many times. And the criteria of the project are almost never complete; when you set code out in the wild of the world subject to users, their interactions can take projects down paths one never expected.

In this way, code and fire are similar. Software Engineers and companies like to think that they have everything thought out, but we typically miss something as we chase a deadline or the deadline chases us. And this is where that similarity with fire disappears: the code evolves, or the project dies.

In all of this, where does mediation happen? Absolutely nowhere. Any piece of code is a balance of negotiations between what the developers think the consumers want, the timeline, and whatever the company or open source community decides… and nowadays, what the company and the open source community decide.

The end users, the consumers, the majority of people, really don't have too much of a say in any of this. There is one methodology that forces consumer interaction more than others (DevOps), but it applies only to more finite projects, and even then it's a negotiation with an opportunity for mediation that I have never seen or heard of actually happening.

“You get what we write.” – every software company, ever, til they get sold or closed.

The Content

The Internet evolved and continues to evolve because the complexity of the platform allows it to. While we tend to think we have control over this, it has encircled smaller communities that have no such control, raging like a wildfire. A lot of that has to do with content.

When it comes to content there’s no true mediation, either – my last entry on journalism and social media points to people deciding to mediate – to actively listen, to actively summarize, and to be neutral. Of course, that’s all silly because humans aren’t very good at that. As a society, we’re happier with 30 minutes of silly people screaming at each other over non-issues than we are with a 2 hour documentary on why silly people scream at each other. It boggles the rational mind, but there it is. Our technology has outstripped us in this regard.

A controversial blog post with a catchy title will be shared across social media even if it's completely wrong. Statistically, the number of people who share actual scientific research is pretty slim – but the number who share opinions on such things is devastatingly large. There is a happiness people find in this conflict that baffles the calm mind.

So, all this content is out there – generating money, having political importance, allegedly influencing elections (another thing to have an opinion on) – and that drives the underlying technology, both hardware and software.

Hardware, for the most part, simply makes things possible and makes things faster. Software gets more and more bloated as software manufacturers make it easier to write code within their own frameworks – nothing beyond the box is encouraged. Thinking inside the box is where the majority of developers now live, depending on a framework to make a living.

There’s just no mediation here.

And the question arises whether there should be.

 

Of Digital Shadows And Digital Ghosts

In writing about shadows and ghosts, it's hard not to draw a line to how we process data – the phrase 'big data' gets tossed around a lot in this way.

Data Science allows us to create constructs of data – interpreted and derived, insinuated and insulated, when in fact we know about as much about that data as we do the people in our own lives – typically insufficient to understand them as people, something I alluded to here.

Data only tells us what has happened; it doesn't tell us what will happen, and it's completely bounded by the availability of data and the frames we collect it in. We can create shadows from that data, but the real value of data is in the ghosts – the collected data in contexts beyond our frames and availability.

This is the implicit flaw in machine learning and even some types of AI. It's where ethics intersects technology: when these technologies have the capacity to affect human lives for better and for worse, it becomes a question of whether they're fair.

And we really aren’t very good at ‘fair’.
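A toy sketch of that 'shadow' problem, with invented numbers: a forecast built only from the data inside our frame keeps projecting the frame forward, and says nothing about the ghost it never captured:

```python
# Toy example with invented numbers: a forecast built only from what we
# collected (our "frame") is blind to anything outside that frame.
history = [10, 12, 14, 16, 18]  # the shadow: what already happened

# Naive forecast: assume the average historical step keeps repeating.
steps = [b - a for a, b in zip(history, history[1:])]
avg_step = sum(steps) / len(steps)
forecast = [history[-1] + avg_step * i for i in range(1, 4)]

actual_future = [19, 11, 5]  # a shift the collected data never hinted at

print("forecast:", forecast)       # [20.0, 22.0, 24.0]
print("actual:  ", actual_future)  # the ghost outside our frame
```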

The Networking of Truth And Falsehood: ‘Fake News’

There is an incessant debate over truth right now, the same as there ever was, branded this time as 'fake news'.

It has everyone mistrusting everyone and everything – everyone, that is, but the least ethically or cognitively competent, willful or not. It's the elephant on the chest of social media companies, of traditional media companies fighting for relevancy in a networked world, and of us, the factually impaired.

In all of this, we focus on the lack of truth. Yet, where we find truth we find precision, and where we find precision, we find error. When we talk about fake news, we’re really talking about the innocuous stories fed to the media – social and traditional – that spread not because they’re good, but because they’re catchy. ‘Sticky’, as marketers would say.

The Basics

Truth itself is a fickle thing. We seek objectivity in our subjective experiences of life, and only when we master these subjectivities do we diminish error and improve the precision. Again, where we experience precision, we experience error – they cannot exist without each other.

There are seconds of truth.
There are minutes of truth.
There are degrees of truth.

It's all trigonometry to an extent – something fuzzy logic measures by weight – but it's there, particularly when reconciling two versions of the truth. When we get three versions of the truth, it gets more complicated. When we get ten versions of the truth, the complexity grows exponentially. So we do what humans do – we simplify when we're overwhelmed. When we're scared, it might become about race or about people 'over there': a wide net that catches innocent and guilty alike simply to catch the guilty.
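A rough sketch of that fuzzy-logic framing, with degrees and weights invented for the example: each version of a claim carries a degree of truth in [0, 1] and a weight for how much we trust the source, and reconciling them is a weighted aggregate rather than a yes or a no:

```python
# Fuzzy-logic sketch with made-up numbers: degrees of truth, weighted by
# how much we trust each source, then reconciled into one value.
reports = [
    # (degree_of_truth, source_trust_weight)
    (0.9, 0.8),  # a careful outlet
    (0.6, 0.5),  # a hurried aggregator
    (0.1, 0.2),  # an anonymous repost
]

weighted_sum = sum(truth * weight for truth, weight in reports)
total_weight = sum(weight for _, weight in reports)
reconciled = weighted_sum / total_weight

print(f"reconciled degree of truth: {reconciled:.2f}")  # 0.69
# With ten or a hundred versions, the bookkeeping (and the disagreement)
# grows quickly -- which is roughly when we give up and simplify.
```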

Aggregating Truth

All of this used to be more manageable when we had fewer versions of the truth. Then the Internet came along and gave us the metaphorical 10,000 monkeys typing out their own versions of Shakespeare all over the web. Most monkeys simply regurgitate the same stuff they read somewhere else, hoping to make their audience click around their site for a little more advertising revenue. When you drill down, there are actually very few monkeys that come up with the best versions – and they're not the same monkeys all the time.

But the monkeys that come up with the most popular versions aren't necessarily the best – and the best versions are not always popular. Network-powered societies amplify this, and we are network powered, so much so that we can no longer easily conceive of all the versions of truth. Facts have become croutons on a low-carb salad – almost extinct, if not extinct.

And it all happens faster. Where we might have gotten news once a day with the printing press, twice a day with the television, thrice with the radio, we have versions of truth on tap 24/7, where the first to cover something gets the prized advertising revenue no matter how uninformative and perhaps wrong the coverage is.

Because we simplify. It's human nature. We 'round off'. We estimate. We guess. We find comfort in opinions and op-eds that get more clicks with fewer facts. And those who want to insert stories that spread can get their research done through aggregate data mined from social networks and your local grocery store.

We find in life that when the people around us make better decisions, we ourselves get better choices. We find that when we make better decisions, those around us get better choices.

And we find that the opposite is also true.

Rethink where you get your content. Reassess your connections and what they share; reassess what you read. If none of it makes you uncomfortable, you're not reading facts but your own fiction, cherry-picked from the 10,000 monkeys – including the ones who take joy in feeding nonsense to the masses.

Go find Shakespeare. Don't trust the monkeys. And if you're one of the monkeys, my word, at least try to get something in with the filler.

Technology And Arts

People in technology of my era and later are strange creatures who delve into the depths of understanding the cold and relentless logic of the systems they create and maintain. We see the same in other fields: in Law, in Medicine, in Accounting, and so many others.

Today, as Lessig wrote, 'Code is Law', and Law wrestles with technology even as technology works to circumvent existing Law. Law, as a freshman law student will tell you, is not Ethics – it is an attempt at the codification of Ethics in a society. That distinction is important yet routinely forgotten by many – and that's where some of those empowered by technology have an ax to grind. Others are just in it for the money, or for some political agenda.

One of the problems we face, as a global society of screen-watchers, is that we have separate silos of technology and arts – where technology is often used as a platform for the liberal arts.

The Limits of Open Data and Big Data

A couple of days ago, one of the many political memes rolling around was about how many police shootings there were under different presidencies. People were, of course, spewing rhetoric on the number of lethal shootings under one administration in the 1980s versus one in the present. I'm being deliberately vague because this is not about politics.

The data presented showed that there were fewer shootings under one administration than the other, but it was just a raw number. It had no basis in the number of police at the time, the number of arrests, or the number of police per capita.

I decided to dig into that.

The U.S. population has gone from roughly 227 million people (circa 1980) to 318.9 million as of 2014. That's fairly substantial. But how many police were there in the 1980s? A search on how many police officers there were in the 1980s was simply useless.
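The kind of normalization the meme skipped is easy to sketch. The population figures are the ones above; the shooting counts below are hypothetical, purely to show why raw numbers mislead:

```python
# Hypothetical illustration: raw counts versus per-capita rates.
# Populations are the figures cited above; shooting counts are invented.
pop_1980s = 227_000_000
pop_2014 = 318_900_000

shootings_then = 300  # hypothetical
shootings_now = 400   # hypothetical

rate_then = shootings_then / pop_1980s * 1_000_000  # per million residents
rate_now = shootings_now / pop_2014 * 1_000_000

print(f"then: {rate_then:.2f} per million")  # 1.32
print(f"now:  {rate_now:.2f} per million")   # 1.25
# A bigger raw count can still be a lower rate -- and without the number
# of officers or arrests, even the rate tells only part of the story.
```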

I went to the Bureau of Justice Statistics page on local police. It turns out that they have only done any form of police officer census from 1992 to the present day, in four-year increments, which means they don't have the data from the 1980s. If that's true – if no data was collected before then – it would mean that decisions were being made without basic data analysis back then, but it also means we hit a limit of open data.

And I’d expended too much time on it (shouldn’t that be easy to find?), so I left it at that.

Assuming that the data simply does not exist, the limit comes from the data never having been collected. I find that difficult to believe, but I'll assume good faith. Either way, the open data is limited.

Assuming that the data exists but is simply not available, it means that the open data is limited.

The point here is that open data has limits, either defined by a simple lack of data or a lack of access to the data. It has limits by collection method (where bias can be inserted), by the level of participation, and so forth.

And as far as that meme, I have no opinion. “Insufficient data” – a phrase I’ve used more often than I should be comfortable with in this day and age.

Crisis Informatics

People who have known me over the years know I've always had a passion for responding to disasters. I can't tell you why it is that when most people are running away, I have a tendency to run in – something I did before I became a Navy Corpsman (and learned how to do better because of it). Later, that turned into a stab at what this is about: enabling the capture of the data itself by first enabling the communication. I even worked a year at a company that does weather warnings and other emergency communication, and was disappointed at how little analysis was being done on the data.

Years later, I now read 'The Data of Disasters'. Some folks have been working on some of the things I had been thinking about and working on as I had time, and they seem to have gotten further. I'm excited about it, since the Alert Retrieval Cache was necessarily closed and didn't gain the traction I would have liked – and open systems present issues with the following (a rough sketch of a record carrying these fields follows the list):

  • Context: A post may be about something mentioned prior (a.k.a. ibid) but not tagged as such because of size limitations.
  • Legitimacy: Whether a source is trustworthy or not, and how many independent sources are reporting on something.
  • Timeliness: Rebuilding a timeline in a network full of shares/retweets can pose a problem because not everyone credits a source. If you go by brute force to find source dates and times, you can pull on threads – but you're not guaranteed their legitimacy in unit time.
  • Perspectives: GIS allows for multiple perspectives on the same event in unit time.
  • Reactions: When possible, seeing when something at a site changes – while all of the above can also change in unit time.
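
Here's the rough sketch promised above of what a single report record might carry to wrestle with those issues. The field names are invented for illustration and aren't taken from the Alert Retrieval Cache or the CU Boulder work:

```python
# Illustrative record for a crisis report; field names are invented.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class CrisisReport:
    text: str
    received_at: datetime                    # timeliness: when we saw it
    reported_at: Optional[datetime] = None   # timeliness: when it allegedly happened
    source_id: str = ""                      # legitimacy: who said it
    corroborating_sources: int = 0           # legitimacy: independent confirmations
    in_reply_to: Optional[str] = None        # context: earlier post it refers to
    location: Optional[Tuple[float, float]] = None  # perspectives: (lat, lon) for GIS
    supersedes: Optional[str] = None         # reactions: report this one updates

report = CrisisReport(
    text="Bridge on the main road is flooded",
    received_at=datetime(2018, 8, 20, 14, 5),
    source_id="observer-42",
    corroborating_sources=2,
    location=(10.65, -61.40),
)
print(report)
```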

It gets a bit more complicated from there – for example, languages can be difficult particularly with dialects and various mixes of languages (such as patois in the Caribbean, where I got into all of this). There’s also a LOT of data involved (big, quick and dirty data) that needs to be cleaned before any analysis can happen.

This is why I envisioned it all as a closed system, but the world believes differently, mixing pictures of food in with information of actual use. Like it or not, there's data out there.

The expansion of data from a source over unit time, as mentioned in their paper on Crisis Informatics, is not something I had thought of. I imagine they're doing great work up there at the Department of Information Science in the College of Media, Communication and Information at CU Boulder.

I'll be keeping an eye out for what else they publish. Might be fun to toss a Beowulf cluster at some data.

The Future Of Technology and Society (May 2016)

If you're one of those who likes tl;dr, skip this post and find a tweet to read.

It has been bothering me. There are a bunch of very positive articles out there that do not touch on the problems we face in technology.

What I mean by this is that, since the early 1980s, I have been voraciously reading up on the future and plotting my own course through it as I go through long, dark tea-times of my career. It allows me to land where things are interesting to me, or where I can make a living for a while as I watch things settle into place. I’ve never been 100% accurate, but I have never starved and have done well enough even in 3rd world countries without advanced infrastructure or policy. Over the course of decades, I have adapted and found myself attempting to affect policies that I found limiting – something most people don’t really care about.

Today, we’re in exciting times. We have the buzz phrases of big data, deep learning and artificial intelligence floating around as if they were all something new rather than things that have advanced and have been re-branded to make them more palatable. Where in the 1990s the joke was that, “We have a pill for that!”, these days the joke is, “We have an app for that!”. As someone who has always striven to provide things of use to the world, I shook my head when flatulence apps went to war for millions of dollars.

Social networks erupted where people willingly give up their privacy to get things for ‘free’. A read of Daniel Solove’s 10 year old book, The Digital Person: Technology and Privacy in the Information Age, should have woken people up in 2006, but by then everyone was being trained to read 140 characters at a time and ‘tl;dr’ became a thing. I am pleased you made it this far, gentle reader, please continue.

Big Data

All these networks collect big data. They have predicted pregnancies from shopping habits and been sued for it (Feb 2012). There's a pretty good list of 10 issues with Big Data and Privacy – here are some highlights (emphasis mine):

1. Privacy breaches and embarrassments.
2. Anonymization could become impossible.
3. Data masking could be defeated to reveal personal information.
4. Unethical actions based on interpretations.
5. Big data analytics are not 100% accurate.
6. Discrimination.
7. Few (if any) legal protections exist for the involved individuals.
8. Big data will probably exist forever.
9. Concerns for e-discovery.
10. Making patents and copyrights irrelevant.

Item 4, to me, is the largest one – coupled with 5 and 7, it gets downright ugly. Do you want people to make judgements about you based on interpretations of the data that aren’t 100% accurate, and where you have no legal protections?

Instead, the legal framework is biased towards those that collect the data – entities known as corporations (you may have heard of them) – through a grouping of disparate ideas known as intellectual property. In fact, in at least one country I know of (Trinidad and Tobago), a database can be copyrighted even though the information in it isn't new. Attempts are being made by some to make things better, but in the end they prove feeble – if not brittle – under a legal system that is undeniably swayed by whoever has the most money.

If it sounds like I’m griping – 10 years ago I would have been. This is just a statement of fact at this point. I did what I could to inform over the years, as did others, but ultimately the choice was not that of a well informed minority but that of a poorly informed majority.

Deep Learning / Artificial Intelligence

Deep learning allows amazing things to be done with data. There is no question of that; I've played with it myself and done my own analyses on things I've been working on in my 'spare time' (read: I have no life). There are a lot of hypotheses that can come from big data, but it's the outliers within the big data that are actually the meat of any hypothesis.

In English: the exceptions create the rules, which further define what needs to be looked at. Outliers in the data can mean that another bit of data needs to be added to the mix.
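A small sketch of that point with invented numbers: a crude z-score check flags the outlier, and the outlier is the row worth asking new questions about:

```python
# Invented data: flag outliers with a crude z-score check. The outlier is
# the exception that tells us another variable belongs in the analysis.
values = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 17.5, 9.7]

mean = sum(values) / len(values)
variance = sum((v - mean) ** 2 for v in values) / len(values)
std_dev = variance ** 0.5

outliers = [v for v in values if abs(v - mean) > 2 * std_dev]
print("outliers:", outliers)  # [17.5]
```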

Artificial Intelligence (AI), on the other hand, can incorporate deep learning and big data. While an AI may not be able to write a news article that can fool an editor, I imagine it could fool the reading public. This is particularly true since, because of the income issues related to the Internet, media outlets have turned to pulpy, opinionated pieces instead of the factual news that used to inform – pieces that attempt to sway, or that simply get more reads by echoing a demographic's sentiment. Then it all gets shared among like-minded people on social media. It's an epic charlie-foxtrot.

People worry about jobs and careers in all of this with robots and AI, and a lot of white-collar folks think it will affect only those in blue-collar jobs. No, it will not. There is an evolution taking place (some call it a revolution), and better-paid white-collar jobs are much juicier targets for saving money for people who care only about their stock price. Five white-collar jobs are already under the gun.

KFC and McDonald's have already begun robotizing. More are coming.

And then let's discuss Ethics in the implementation of AI – look at what Microsoft did with their Twitter bot, Tay. We have a large corporation putting an alleged AI (chatbot, whatever you want to call it) into a live environment without a thought to the consequences. Granted, it seemed like a simple evolution of Eliza (click the link to see what that means), but you don't just let your dog off its leash or your AI out into an uncontrolled environment. It's just not done, particularly in an environment where kids need 'safe spaces' and others need trigger warnings. If they didn't have an army of lawyers – another issue with technology – they probably would have had their pants shaken severely in courts across the world. Ahh, but they do have an army of well-paid lawyers – which leads us to Intellectual Property.

Copyrights, Patents and Trademarks (and Privacy)

If you haven't read anything about Copyright by Lawrence Lessig in the past decade, or about Privacy by Daniel Solove, you're akin to an unlicensed, blindfolded teenager joyriding in your Mom's Corvette ZR1. Sure, things might be fun, but it's only a matter of time unless you're really, really lucky. You shouldn't be allowed near a computing device without these prerequisites, because you're uninformed. This is not alarmist. This is your reality.

And anyone writing code without this level of familiarity is driving an 18 wheeler in much the same way.

You need a lawyer just to flush a virtual toilet these days. I exaggerate to make the point – but maybe not. It would depend on who owns the virtual toilet.

You can convert any text into a patent application. Really.

Meanwhile, patent trolls are finally being seen as harming innovation. The key point here is that the entire system is biased toward those with more in the bank – which means that small companies are destroyed while larger companies, such as Google and Oracle, fight larger legal battles that impact more people than even know about them. Even writing software tools has become a legal battle between the behemoths.

‘Fair Use’ – the ability to use things you bought in ways that allow you to keep copies of them – has all but been lost in all of this.

Meanwhile, Wounded Warrior – an alleged veterans' non-profit – has been suing other non-profits over use of the phrase 'Wounded Warrior'. If you want to take the nice approach, they're trying to avoid dilution of their trademark… at the cost of veterans themselves, but that doesn't explain them suing two of their own former employees with PTSD.

And Here I Am, Wondering About The Future.

There are a bunch of very positive articles out there that do not touch on the problems we face in technology. Our technology is presently being held for ransom by legal frameworks that do not fit well; this in turn means our ability to innovate, and by proxy entrepreneurship, are also being held ransom. Meanwhile we have people running around with Stockholm Syndrome waiting for the next iPhone hand built by suicidal workers, or the next application that they can open their private data to (hi, Google, Microsoft!), or…

I can't predict anything at this point. It used to be much simpler and, by proxy, more easily controlled. The question of whether to do something used to be an ethical one, but now we go to lawyers for ethics (a group not largely known for ethics – apologies to those who are). Governments institute policies biased by whoever funds politicians' campaigns, or gives United States congresspeople nice things. It affects the entire world, and every few years I think it won't last – and yet it continues.

Too big to fail.

But out of all of this, I don't mean to stop trying. I don't mean to stop innovating, starting new businesses, and so on. What I mean is: we have a lot of things to do properly to assure a future that isn't as dim as I see it now, and to reach the kids who are hooked on realities that someone else created rather than ones they imagined themselves. Imagination itself needs to be revisited, cultivated and unleashed against all of this like a cool wind across the desert.

It cannot be done blindly. People need to understand all of this. And if you made it this far – congratulations – I offer that you should, if not share this, share the ideas within it freely rather than simply clicking ‘like’ and hoping for the best.

We cannot change things on our own.

As for myself – just surfing the waves as they come in, but I fully intend to build my house on a distant shore at this point.