In writing about shadows and ghosts, it’s hard not to draw a parallel to how we process data – the phrase ‘big data’ gets tossed around a lot in this way.
Data science allows us to create constructs of data – interpreted and derived, insinuated and insulated – when in fact we know about as much about that data as we do about the people in our own lives: typically not enough to understand them as people, something I alluded to here.
Data only tells us what has happened, not what will happen, and it’s entirely bounded by the frames and availability we bring to it. We can create shadows from that data, but the real value of data is in the ghosts – the collected data in contexts beyond our frames and availability.
This is the implicit flaw in machine learning and even some types of AI. It’s where ethics intersects technology: when technologies have the capacity to affect human lives for better and worse, it becomes a question of whether they’re fair.
And we really aren’t very good at ‘fair’.
Companies want to make more money; they don’t care about ethics.
Maybe that’s part of the point of the article. In fact, since I wrote it, I can tell you it is part of the point.
I could work harder at spelling things out, but instead I expect people to form their own opinion of what could be – perhaps what should be.
I leave stating the obvious problems to other people. Thank you for that.