Artificial Intelligences and Responsibility.

MIT Technology Review has a meandering article, “A.I. Can Be Made Legally Responsible for Its Decisions“. In its own way, it tries to chart the territory of trade secrets and corporations, threading a needle toward what we may actually need to change to adapt to using Artificial Intelligence (AI).

One of the things that surprises me in such writing and conversations is not that they revolve around protecting trade secrets – I’m sorry, if you put your self-changing code out there and are willing to take the risk, I see that as part of the deal – it’s that they focus on the decision process. Almost all bad decisions in code I have encountered came about because the developers were hidden in a silo, behind a process that isolated them… sort of like what happens with an AI, only two-fold.

If the decision process is flawed, the first thing to be looked at is the source data for the decisions – and with an AI this can be a daunting task, as it builds learning algorithms based on… data. And so you have to delve into whether the data used to build those algorithms was corrupt or incomplete – the former is an issue we keep getting better at minimizing; the latter cannot be solved, if only because we as individuals, and more so as a society, are terrible at identifying what we don’t know.
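As a sketch of the incomplete-data problem – everything here (the loan scenario, the region labels, the decisions) is invented purely for illustration – a toy model trained on data that is missing an entire category won’t fail loudly; it quietly falls back to a default, and no audit of the decision process alone will reveal the gap:

```python
# Toy "model" that learns decisions from historical data.
# All names and data are hypothetical.
from collections import Counter

# Historical decisions -- note the training set contains NO applicants
# from region "C": the data is incomplete, not obviously corrupt.
training = [("A", "approve"), ("A", "approve"), ("B", "deny"),
            ("B", "deny"), ("A", "approve"), ("B", "deny")]

def train(rows):
    """Learn the majority decision per region."""
    by_region = {}
    for region, decision in rows:
        by_region.setdefault(region, Counter())[decision] += 1
    return {r: c.most_common(1)[0][0] for r, c in by_region.items()}

def predict(model, region, default="deny"):
    # The model has never seen region "C"; it silently falls back to a
    # default -- a flaw that lives in the data, not the decision process.
    return model.get(region, default)

model = train(training)
print(predict(model, "A"))  # learned from the data
print(predict(model, "C"))  # not learned at all: just a gap
```

The point of the sketch is that the “C” prediction looks exactly like a real decision from the outside.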

So, when it comes to legal responsibility for code on a server, AI or not, who is responsible? The publishing company, of course – though if you look at software licensing over the decades, you’ll find that software companies have become pretty good at divesting themselves of responsibility. “If you use our software, we are not responsible for anything” is the short version of most end user license agreements and software licenses, and by clicking OK you’re basically indemnifying the publisher. That, you see, is the crux of the problem when we speak of AI and responsibility.

In the legal frameworks, armies of lawyers camp on retainer, waiting for anything to happen so that they can defend their well-paying client simply by pointing at a contract that puts all responsibility on the user. Lawyers can argue that point, but they get paid to and I don’t. I’m sure there are loopholes. I’m sure that when pushed into a corner by another company with similar or better legal resources, ‘settle’ becomes a word used more frequently.

So, if companies can’t be held responsible for their non-AI code, how can they be held responsible for their AI code?

Free Software and Open Source software advocates such as myself have made these points time and again, in so many ways – but this AI discussion extends into data as well, which pulls the Open Data Initiative into the spotlight too.

The system is flawed in this regard, so discussing whether an AI can be responsible for its decisions is silly. The AI won’t pay a fine; the AI won’t go to jail (what does ‘life’ mean for an AI, anyway?). Largely, it’s the court of public opinion that guides things – and that narrative is easily changed by PR people who have a side door to the legal department.

So let’s not discuss AI and responsibility. Let’s discuss code, data and responsibility – let’s go back to where the root of the problem exists. I’m not an MIT graduate, but I do understand Garbage In, Garbage Out (GIGO).

Deep Learning, Information Bottlenecks – and Osmosis.

I’ve experimented with deep learning in a few different ways in the past, coming up with my own thoughts on how things work and why they work. When I stopped in 2016, it was apparent that I was missing something and that I needed some distance between myself and the topic. I gave up those Pine64s and, as it happened, moved away from where I was doing the work – more importantly, divorcing myself from a Software Engineering world where ‘solutions right now’ always trumped ‘solutions’: the former the harbinger of problems, the latter the Holy Grail of every software engineer who dares dream in a world that, except for a minority, requires lockstep precision within an industry that spends its time firefighting because of solutions-right-now.

It’s disenchanting. Being disenchanted allows for little in the way of real solutions, at least for myself.

And today I read, “New Theory Cracks Open The Black Box of Deep Neural Networks“. Of course, deep learning is not that new, and the ‘Information Bottleneck’ idea stems from the original 1999 work, the Information Bottleneck Method. That works, perhaps, in explaining things on a surface level and an informational level – but as I read it, I was reminded of secondary school biology: osmosis. No one seems to have connected the two when they are so suitably connected, and I’d wager that osmosis scales better, since information bottlenecks, when themselves arranged in a matrix, would pretty much mimic a tunable osmosis.
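For the record, the 1999 method frames learning as a trade-off: compress the input X into a representation T (driving down I(X;T)) while keeping T informative about the label Y (keeping I(T;Y) high), balancing the two with a multiplier β. A minimal sketch with made-up discrete distributions shows what compression costs – every number below is invented for illustration:

```python
# Information Bottleneck intuition on discrete variables.
# The joint distribution is made up for illustration.
import numpy as np

def mutual_info(p_joint):
    """I(A;B) in bits from a joint distribution table p(a, b)."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (pa @ pb)[mask])).sum())

# Joint distribution p(x, y): x carries some information about y.
p_xy = np.array([[0.25, 0.05],
                 [0.05, 0.25],
                 [0.20, 0.20]])

# A deterministic "bottleneck" T = f(X) that merges x=1 and x=2
# into a single cluster: p(t, y) sums the merged rows.
p_ty = np.vstack([p_xy[0], p_xy[1] + p_xy[2]])

# Compression discards some, but not all, information about Y:
print(mutual_info(p_xy))  # I(X;Y) -- what the raw input carries
print(mutual_info(p_ty))  # I(T;Y) -- what survives the bottleneck
```

The IB objective, minimizing I(X;T) − β·I(T;Y), tunes exactly that loss of predictive information against the gain in compression – which is the “tunable” part of the membrane analogy above.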

That said, I’ve found the major problem with deep learning to be that we rigidly define the inputs when, quite possibly, we should be looser in our definitions of what we put in. This aligns better with chaos theory – something the Wired article seems to dismiss:

…When Schwab and Mehta applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. It was a stunning indication that, as the biophysicist Ilya Nemenman said at the time, “extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

The only problem is that, in general, the real world isn’t fractal. “The natural world is not ears on ears on ears on ears; it’s eyeballs on faces on people on scenes,” Cranmer said…

Pragmatically, this is what we see when we work on projects – but the problem is not what we see, it’s what we don’t see: the things we don’t intuitively connect because of our own limitations. With simple deep learning we may get away with what we see, but on a much larger scale we may be looking at the flap of a butterfly’s wings on one side of the world creating the tipping point for a hurricane on the other.
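The standard toy model for that butterfly is the logistic map in its chaotic regime: two inputs differing by a millionth end up on completely different trajectories. A quick sketch (the starting values are arbitrary):

```python
def logistic(x, r=4.0):
    # r = 4.0 puts the map in its fully chaotic regime
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by one millionth
diffs = []
for _ in range(40):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

print(diffs[0])    # still tiny: roughly the input difference
print(max(diffs))  # order 1: the trajectories have fully diverged
```

The separation roughly doubles each step, so a difference too small to measure at the input dominates the output within a few dozen iterations – which is why “what we don’t see” in the inputs matters.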

Of course, this is all theory, and hardly some earth-shattering change in the way we look at things – but a small change in how we approach things could well be what we need to move forward at various intersections. In this, I am trying to be a simple butterfly flapping his wings.