The Anki Vector: Let’s Wait For the API.

Vector playing with cube.

So, I got an Anki Vector. My reasons for buying one were pretty simple, really – it seemed like a throwback to the 70s, when I had a Big Trak, a programmable machine that often had me shooting my mother with a laser and harassing the family dog.

With Big Trak’s Logo-ish programming, there were tangible results, even if the ‘fire phaser’ command was really just a flashing light. It was the 1970s, after all, an era when Star Wars and Star Trek reigned supreme.

So the idea of the Anki Vector was pretty easy for me to contend with. I’ve been playing with the idea of building and programming a personal robot, and this would allow me to get away from ‘building’.

I hoped.

Out of the Box.

The Anki Vector needed some charging in its little home station, and I dutifully installed the application on my phone, following the instructions and connecting it to my Wi-Fi. While people have said they had problems with the voice recognition, I have not. Just speak clearly and at an even pace, and Vector seems to handle things well.

The focal range that Vector’s camera(s) are limited to seems to be between 12 and 24 inches, based on when it can identify me. It can identify me, even with glasses, after some training – roughly 30 minutes – as long as my face is within 12-24 inches of its face.

It’s a near-sighted robot, apparently, which had me wondering if that would be something to work with through the API.
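Whenever that API does show up, this is the kind of five-minute experiment I have in mind – sketched in Python against the SDK Anki has hinted at. Every module, method and serial number here is my guess, not a documented interface:

```python
# Pure speculation: what I hope a Vector API looks like, sketched in Python.
# The anki_vector package and both calls below are assumptions on my part.
import anki_vector

ROBOT_SERIAL = "00e20100"  # placeholder serial from the companion app

with anki_vector.Robot(ROBOT_SERIAL) as robot:
    # Work with the near-sightedness instead of against it: ask the human
    # to step into the 12-24 inch band where face recognition seems to work.
    robot.behavior.say_text("Come closer. I am near-sighted.")
```

If the real SDK is anywhere near this simple, the near-sightedness becomes a prompt for interaction rather than a limitation.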

It is an expressive robot – it borrows from WALL-E in this regard, it seems. And while it can go to the Internet and impress your friends with its ability to read things off of Wikipedia aloud, it’s not actually that smart. In that regard, it’s Wikipedia on tracks with expressive eyes that, yes, you can change the color of.

Really, you run out of tricks with Vector within the first hour, at least at this time – the marketing team apparently wrote the technical documentation, which is certainly easy to read, largely because it doesn’t actually say much. I’m still trying to figure out why the cube came with it – somewhere, it said the cube helps Vector navigate outside of its ‘home area’ – but navigate and do what?

Explore and do what? Take a picture and see it where? There is a lack of clarity on things in the documentation. While petting Vector has an odd satisfaction to it, it doesn’t quite give me enough.

On December 6th, I tweeted to Anki and asked them about the API – because with the hardware in the Vector, I should be able to do some groovy things and expand its functionality.

Crickets for the last 3 days.

Without that API, I think the Vector is limited to the novelty part of the store… which is sad, because I had hopes that it would be a lot more.

Maybe that API will come out before I forget that I have a Vector.

Of Digital Shadows And Digital Ghosts

Ice, Shadow and Stone.

In writing about shadows and ghosts, it’s hard not to draw the line to how we process data – the phrase ‘big data’ gets tossed around a lot in this way.

Data Science allows us to create constructs of data – interpreted and derived, insinuated and insulated – when in fact we know about as much about that data as we do about the people in our own lives: typically not enough to understand them as people, something I alluded to here.

Data only tells us what has happened; it doesn’t tell us what will happen, and it is completely bound by what data we make available and how we frame it. We can create shadows from that data, but the real value of data is in the ghosts – the collected data in contexts beyond our frames and availability.

This is the implicit flaw in machine learning and even some types of AI. It’s where ethics intersects technology: when these technologies have the capacity to affect human lives for better and worse, it becomes a problem of whether they are fair.

And we really aren’t very good at ‘fair’.

Artificial Intelligences and Responsibility.

MIT Technology Review has a meandering article, “AI Can Be Made Legally Responsible for Its Decisions”. In its own way, it tries to chart the territories of trade secrets and corporations, threading a needle that we may actually need to change to adapt to using Artificial Intelligence (AI).

One of the things that surprises me in such writing and conversations is not that it revolves around protecting trade secrets – I’m sorry, if you put your self-changing code out there and are willing to take the risk, I see that as part of it – but that it focuses on the decision process. Almost all bad decisions in code I have encountered came about because the developers were hidden in a silo, behind a process that isolated them… sort of like what happens with an AI, only two-fold.

If the decision process is flawed, the first thing to look at is the source data for the decisions – and with an AI this can be a daunting task, since it builds learning algorithms based on… data. And so you have to delve into whether the data used to build those algorithms was corrupt or incomplete – the former is an issue we keep getting better at minimizing; the latter cannot be solved, if only because we as individuals, and more so as a society, are terrible at identifying what we don’t know.
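To make that concrete: here’s a minimal sketch, in Python, of the first audit I’d run on any training data before trusting decisions built on it. The file and column names are invented for illustration:

```python
# A rough first-pass audit on training data: how much of it is missing
# (corrupt), and how skewed the labels are (incomplete). Names are made up.
import pandas as pd

def audit(df: pd.DataFrame, label_col: str) -> None:
    # Corrupt/missing data: fraction of nulls per column.
    print("Missing values per column:")
    print(df.isna().mean().sort_values(ascending=False))

    # Incomplete data: a lopsided label distribution hints at what the
    # data does not cover - the part we are terrible at identifying.
    print("\nLabel distribution:")
    print(df[label_col].value_counts(normalize=True))

df = pd.read_csv("training_data.csv")  # hypothetical dataset
audit(df, label_col="outcome")
```

None of this proves the data is complete – nothing can – but it at least tells you where the obvious holes are.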

So, when it comes to legal responsibility for code on a server, AI or not, who is responsible? The publishing company, of course – though if you look at software licensing over the decades, you’ll find that software companies have become pretty good at divesting themselves of responsibility. “If you use our software, we are not responsible for anything” is the short version of most end user license agreements and software licenses, and by clicking through the OK, you’re basically indemnifying the publisher. That, you see, is the crux of the problem when we speak of AI and responsibility.

In the legal frameworks, camped armies of lawyers wait on retainer for anything to happen so that they can defend their well-paying client, simply by pointing at a contract that puts all responsibility on the user. Lawyers can argue that point, but they get paid to and I don’t. I’m sure there are some loopholes. I’m sure that when pushed into a corner by another company with similar or better legal resources, ‘settle’ becomes a word used more frequently.

So, if companies can’t be held responsible for their non-AI code, how can they be held responsible for their AI code?

Free Software and Open Source software advocates such as myself have made these points in so many ways over the years – but this AI discussion extends into data as well, which pulls the Open Data Initiative into the spotlight too.

The system is flawed in this regard, so to discuss whether an AI can be responsible for its decisions is silly. The AI won’t pay a fine; the AI won’t go to jail (what does ‘life’ mean for an AI, anyway?). Largely, it’s the court of public opinion that guides things – and that narrative is easily changed by PR people who have a side door to the legal department.

So let’s not discuss AI and responsibility. Let’s discuss code, data and responsibility – let’s go back to where the root of the problem exists. I’m not an MIT graduate, but I do understand Garbage In, Garbage Out (GIGO).

The Study Of What Others Do.

Taran Rampersad
Courtesy Mark Lyndersay, LyndersayDigital

I hate having my picture taken. Over the years, I have found the best defense from cameras is to hold one. This has weakened in a day and age where every phone has a camera, and everyone wants to be seen with someone – but Mark Lyndersay needed a picture of me for TechNewsTT, where the majority of my writing has been published this year outside of my own websites.

Going to his studio was a rare glimpse for me into the world of professional photography. It was clear to me, this amateur photographer, almost immediately that it would take me at least a decade to do the editing I watched Mark do quickly – to manage photos the way he managed his, to understand why he did the things he did – a matter of simple experience that cannot be replaced with meetings and requirements discovery.

You see, I had been thinking of writing my own photo management software in Python – something to automate a lot of things. I had briefly considered this when I had begun selling some of my prints in Florida, and it was latent in my mind as a project to ‘get to’. In conversation with Sarita Rampersad, another professional photographer (unrelated), I had asked her what she used last year and why. It was clear that it would take more than a passing effort on my part to build something more useful than the tools she was using. The visit to Mark’s studio underlined this.

The Roots.

Reflecting on this on the way home, I went back to the very core of how I started working with technology. From an early age, I was encouraged – by rote and by whip, as it were – to observe what was being done to understand how it was being done. This was the root of the family business, the now gone Rampersad’s Electrical Engineering, a company that was built on fixing industrial electro-mechanical equipment with clients ranging from the U.S. Navy to someone who just needed their water pump repaired (Even WASA).

This background served me well over the years, and understandably frustrated managers and CEOs. Knowing the context in which things were used allowed for useful processes and code; it allowed things to become more efficient, and allowed things to be written to last instead of a constant evolution of, “Wouldn’t it be nice if?”. In a world of agile processes, the closest thing to this is the DevOps iteration of Agile, which even people who practice Agile haven’t heard of (because they are soundly in the Agile Cave).

DevOps is a form of Agile where every stakeholder is directly involved. And that, to me, is also a problem, because implicit hierarchies and office (if only office) politics get involved. It’s a bleeding mess of tissue to sew together into a frankensystem, but at least that frankensystem is closer to what people actually need. Assuming, of course, they understand what they need.

To me, it boils down to studying what other people do.

Observe, Analyze, Communicate, Build.

When I started as an ‘apprentice’ programmer, this was drilled into me by an Uncle who was a Systems Analyst, and ‘allowed’ me to write the code for projects that he was working on. He didn’t boil it down to observe, analyze, communicate and build; I refined that myself over the course of the years.

No matter the process, it all boils down to someone being able to bridge how people work/play to get something done – to understand what is needed, and how to make their lives easier through automation and information structure. Observing people do their jobs is important, analyzing it secondary, but the most important part is the one thing that an AI cannot yet do: communicate – the process of listening, speaking (or writing, or…), and then feedback. In order of importance, software engineering – and, I believe, any form of process or structural engineering – is:

  1. Communication
  2. Observation
  3. Analysis
  4. Build

This is not the order in which things are done, of course, but the emphasis that is most important in understanding how present systems work and how future systems should work.

So often over the years, I’ve seen software engineers relegated to the role of code monkeys, with emphasis only on ‘Build’, when the most important part is building what is needed. This is where business analysts got introduced somewhere along the way, but they too are put into silos. This is underlined by HR departments focusing only on the ability to ‘build’, where analysts are expected to be a different sort of role. When these roles were split, I cannot say, but to be both is something too large and round to fit into the small square holes of the modern enterprise.

It is lost, eroded, and there is a part of me that wonders if this is a good thing. Studying what other people do has allowed me to do so many things within and without technology, and it worries me that, in a future where AI will be taking over the ‘Build’, software engineers aren’t being required to focus more on the soft skills they will need in the coming years.

Sure, the AI can build it – but is ‘it’ what needs to be built?

Humanoid Gets Citizenship: Odd.

Sophia Robot.

If you think the world couldn’t get any weirder, it just got ratcheted up. Saudi Arabia, the country where atheists are considered terrorists and women have fewer rights than in other countries, has made a humanoid artificial intelligence a citizen – the first humanoid AI citizen in the world.

So, the question is: is Sophia the AI a woman with fewer rights than she would have elsewhere in the world, or is she a ‘terrorist’ – or has she had an AI sex change and become Muslim?

The AI Future On Mankind’s Canvas

Doctor Leia.

I met her and the young Brazilian woman on the flight from Miami to Orlando – this young doctor who had an interview in Ocala. She was to drive across to Ocala, to the east, to see if she would get the job. She didn’t look old enough to be a doctor, but I passed the age threshold where doctors were younger than myself years ago. We talked about medicine and medical administration for a while, even as I checked up on the nervous Brazilian high school graduate. I sat, a thorn between two roses, all the while thinking:

What sort of world were they entering? Doc Leia, a graduate of The University of the West Indies, off to Ocala; the young woman to my right, off to see the sights as a reward for having survived so many years of schooling. They were both easily younger than most of my nieces. The Doctor had already become heavily invested in her future – medical school was a daunting path, and might have been one I would have pursued with the right opportunities. The other was about to invest in her future, and it bothered me that there wasn’t as clear a path as there used to be.

Artificial intelligence – diagnosing patients on the other side of the world – is promising to change medicine itself. The first AI attorney, ‘Ross’, had been hired by a NYC firm. The education system in the United States wasn’t factoring this sort of thing in (unless maybe you’re in the MIT Media Lab), so I was pretty sure the education systems in the Caribbean and Latin America weren’t factoring it in either. I’ve been playing with Natural Language Processing and Deep Learning myself, and was amazed at what can already be done.
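For a sense of how low the barrier already is, here’s the kind of toy experiment I mean – a scikit-learn text classifier that takes minutes to stand up. The training sentences and labels are invented for illustration:

```python
# A minimal sketch of a text classifier: TF-IDF features feeding a logistic
# regression. Toy data, invented labels - a taste, not a diagnosis engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["chest pain and shortness of breath",
         "contract dispute over payment terms",
         "persistent cough and fever",
         "breach of licensing agreement"]
labels = ["medical", "legal", "medical", "legal"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["fever and chills"]))  # -> ['medical']
```

If an amateur can do that on a laptop, imagine what a funded team can do with real data.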

The technology threat to jobs – to employment – has historically been robotics, something that has displaced enough workers to cause a stir over the last few decades – but it has largely been thought that technology would only replace the blue collar jobs. Hubris. Any job that requires research and repetition, and can allow for reduced costs for companies, is a target. Watson’s bedside manner might be a little more icy than House’s, but the results aren’t fiction.

What are the jobs of the future, for those kids in, starting, or just finished with a tertiary education? It’s a gamble by present reckoning. Here are a few thoughts, though:

  • A job that requires legal responsibility is pretty safe, so far. While Watson made that diagnosis, for legal reasons I am certain that licensed doctors were the ones that dealt with the patient, as well as gave the legal diagnosis.
  • Dealing well with humans, which has been important for centuries, has just become much more important – it separates us from AI. So far.
  • Understanding the technology and, more importantly, the dynamic limits of the technology will be key.

Even with that, even as fast food outlets switch to touchscreens for ordering food (imagine the disease vectors on those!), even as AIs become more and more prominent, the landscape is being shaken by technology driven by financial profit.

And I don’t think that it’s right that there’s no real plan for that. It’s coming, there is no stopping that, but what are we as a society doing to prepare the new work force for what is to come? What can be done?

Conversations might be a good place to start.

The Future Of Technology and Society (May 2016)

If you’re one of those who likes tl;dr, skip this post and find a tweet to read.

It has been bothering me. There are a bunch of very positive articles out there that do not touch on the problems we face in technology.

What I mean by this is that, since the early 1980s, I have been voraciously reading up on the future and plotting my own course through it as I go through long, dark tea-times of my career. It allows me to land where things are interesting to me, or where I can make a living for a while as I watch things settle into place. I’ve never been 100% accurate, but I have never starved and have done well enough even in 3rd world countries without advanced infrastructure or policy. Over the course of decades, I have adapted and found myself attempting to affect policies that I found limiting – something most people don’t really care about.

Today, we’re in exciting times. We have the buzz phrases of big data, deep learning and artificial intelligence floating around as if they were all something new rather than things that have advanced and have been re-branded to make them more palatable. Where in the 1990s the joke was that, “We have a pill for that!”, these days the joke is, “We have an app for that!”. As someone who has always striven to provide things of use to the world, I shook my head when flatulence apps went to war for millions of dollars.

Social networks erupted where people willingly give up their privacy to get things for ‘free’. A read of Daniel Solove’s 10 year old book, The Digital Person: Technology and Privacy in the Information Age, should have woken people up in 2006, but by then everyone was being trained to read 140 characters at a time and ‘tl;dr’ became a thing. I am pleased you made it this far, gentle reader, please continue.

Big Data

All these networks collect big data. They have predicted pregnancies from shopping habits and been sued for it (Feb 2012). There’s a pretty good list of 10 issues with Big Data and Privacy – here are some highlights (emphasis mine):

1. Privacy breaches and embarrassments.
2. Anonymization could become impossible.
3. Data masking could be defeated to reveal personal information.
4. Unethical actions based on interpretations.
5. Big data analytics are not 100% accurate.
6. Discrimination.
7. Few (if any) legal protections exist for the involved individuals.
8. Big data will probably exist forever.
9. Concerns for e-discovery.
10. Making patents and copyrights irrelevant.

Item 4, to me, is the largest one – coupled with 5 and 7, it gets downright ugly. Do you want people making judgments about you based on interpretations of data that aren’t 100% accurate, where you have no legal protections?
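Items 2 and 3 are easy to demonstrate, too. Here’s a hedged sketch of a re-identification check on a supposedly anonymized release: count how many records share each combination of quasi-identifiers. The file and column names are hypothetical; any group of size one is one person:

```python
# Why "anonymized" often isn't: records that are unique on a few
# quasi-identifiers can be matched against other datasets. Names invented.
import pandas as pd

df = pd.read_csv("released_dataset.csv")  # hypothetical "masked" release
quasi_identifiers = ["zip_code", "birth_year", "gender"]

group_sizes = df.groupby(quasi_identifiers).size()
unique_rows = (group_sizes == 1).sum()
print(f"{unique_rows} of {len(group_sizes)} combinations match exactly one person")
```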

Instead, the legal framework is biased toward those who collect the data – entities known as corporations (you may have heard of them) – through a grouping of disparate ideas known as intellectual property. In fact, in at least one country I know of (Trinidad and Tobago), a database can be copyrighted even though the information in it isn’t new. Attempts are being made by some to make things better, but in the end they become feeble – if not brittle – under a legal system that is undeniably swayed by whoever has the most money.

If it sounds like I’m griping – 10 years ago I would have been. This is just a statement of fact at this point. I did what I could to inform over the years, as did others, but ultimately the choice was not that of a well informed minority but that of a poorly informed majority.

Deep Learning / Artificial Intelligence

Deep learning allows amazing things to be done with data. There is no question of that; I’ve played with it myself and done my own analyses on things I have been working on in my ‘spare time’ (read: I have no life). There are a lot of hypotheses that can come from big data, but it’s the outliers within the big data that are actually the meat of any hypothesis.

In English: the exceptions create the rules, which further define what needs to be looked at. Outliers in the data can mean that another bit of data needs to be added to the mix.
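A crude illustration of that, with invented numbers – a simple z-score test flags the oddball, and the oddball is where the questions start:

```python
# The rows a naive z-score screen throws out are often exactly the ones
# worth a second look. The values here are invented for illustration.
import numpy as np

values = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 27.5])  # one oddball
z_scores = (values - values.mean()) / values.std()

outliers = values[np.abs(z_scores) > 2]
print("Outliers worth investigating:", outliers)  # -> [27.5]
```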

Artificial Intelligence (AI), on the other hand, can incorporate deep learning and big data. While an AI may not be able to write a news article that can fool an editor, I imagine it could fool the reading public. This is particularly true since, because of the income issues related to the Internet, media outlets have gone to pulpy opinionated pieces instead of the factual news that used to inform, rather than attempt to sway or just get more reads by echoing a demographic’s sentiment. Then it is shared by like-minded people on social media. It’s an epic charlie-foxtrot.

People worry about jobs and careers in all of this with robots and AI, and a lot of white collar folks are thinking it will affect only those in blue collar jobs. No, it will not. There is an evolution taking place (some call it a revolution), and better paid white collar jobs are much juicier for saving money for people who care only about their stock price. Five white collar jobs are already under the gun.

KFC and McDonald’s have already begun robotizing. More are coming.

And then let’s discuss ethics in the implementation of AI – look at what Microsoft did with their Twitter bot, Tay. We have a large corporation putting an alleged AI (chatbot, whatever you want to call it) into a live environment without a thought to the consequences. Granted, it seemed like a simple evolution of ELIZA (click the link to see what that means), but you don’t just let your dog off its leash, or your AI out, in an uncontrolled environment. It’s just not done, particularly in an environment where kids need ‘safe places’ and others need trigger warnings. If they didn’t have an army of lawyers – another issue with technology – they probably would have had their pants shaken severely in courts across the world. Ahh, but they do have an army of well paid lawyers – which leads us to Intellectual Property.

Copyrights, Patents and Trademarks (and Privacy)

If you haven’t read anything about copyright by Lawrence Lessig in the past decade, or privacy by Daniel Solove, you’re akin to an unlicensed, blindfolded teenager joyriding in your Mom’s Corvette ZR1. Sure, things might be fun, but it’s only a matter of time unless you’re really, really lucky. You shouldn’t be allowed near a computing device without these prerequisites, because you’re uninformed. This is not alarmist. This is your reality.

And anyone writing code without this level of familiarity is driving an 18 wheeler in much the same way.

You need a lawyer just to flush a virtual toilet these days. I exaggerate to make the point – but maybe not. It would depend on who owns the virtual toilet.

You can convert any text into a patent application. Really.

Meanwhile, patent trolls are finally being seen as harming innovation. The key point here is that the entire system is biased toward those with more in the bank – which means that small companies are destroyed while the larger companies, such as Google and Oracle, wage legal battles that impact more people than even know about them. Even writing software tools has become a legal battle between the behemoths.

‘Fair Use’ – the ability to use things you bought in ways that allow you to keep copies of them – has all but been lost in all of this.

Meanwhile, Wounded Warrior – an alleged veterans’ non-profit – has been suing other non-profits over use of the phrase ‘Wounded Warrior’. If you want to take the nice approach, they’re trying to avoid dilution of their trademark… at the cost of veterans themselves – but that doesn’t explain them suing two of their own former employees with PTSD.

And Here I Am, Wondering About The Future.

There are a bunch of very positive articles out there that do not touch on the problems we face in technology. Our technology is presently being held for ransom by legal frameworks that do not fit well; this in turn means our ability to innovate, and by proxy entrepreneurship, are also being held ransom. Meanwhile we have people running around with Stockholm Syndrome waiting for the next iPhone hand built by suicidal workers, or the next application that they can open their private data to (hi, Google, Microsoft!), or…

I can’t predict anything at this point. It used to be much simpler and, by proxy, easily controlled. The question of whether to do something used to be an ethical one, but now we go to lawyers for ethics (a group largely not known for ethics – apologies to those who are). Governments institute policies biased by whoever funds the campaigns of politicians, or gives United States congresspeople nice things. It affects the entire world, and every few years I think it won’t last – yet it continues.

Too big to fail.

But out of all of this, I don’t mean to stop trying. I don’t mean to stop innovating, starting new businesses, etc. What I mean is: we have a lot of things to do properly to assure a future that isn’t as dim as I see it now – to assure that the kids who are hooked on realities someone else created get to build the ones they imagine. Imagination itself needs to be revisited, cultivated and unleashed against all of this like a cool wind across the desert.

It cannot be done blindly. People need to understand all of this. And if you made it this far – congratulations – I offer that you should, if not share this, share the ideas within it freely rather than simply clicking ‘like’ and hoping for the best.

We cannot change things on our own.

As for myself – just surfing the waves as they come in, but I fully intend to build my house on a distant shore at this point.

Know Your Environment: Oh Tay

You may remember – it happened only last week – that Microsoft’s Tay got shut down for being a jerk.

The commentators have already speculated – and, in the broad strokes, properly, in my opinion – that this is because people were jerks. Tay, from what I saw, was pretty much a simple ELIZA brought forward a few degrees1. It’s a bit more complicated than that, but in the really broad strokes, one has to wonder what Microsoft was actually thinking by setting Tay up on Twitter without some safeties in place.

Please tell me someone on that team said, “You know, I think that this is a bad idea. We should add some safeties.” It’s like sending a kid on her own to a prison to hang out with the inmates and not expecting something to go wrong. “Pick you up at 5 p.m., have a great time sweetie!”

Maybe you’re thinking, “Twitter isn’t that bad.” Maybe for you it isn’t – for me it isn’t – because I attenuate who and what I ‘listen’ to. Clearly Tay didn’t. All things considered, that’s a pretty important thing for people to do – and frankly, it does seem like we humans get that wrong more often than not.
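Even a naive gate would have been something. Here’s a toy sketch of the kind of attenuation I mean – a blocklist screen in front of an ELIZA-style learner, with placeholder entries. A real safety layer needs far more than this, but Tay appears to have shipped with less:

```python
# Toy illustration of attenuating input before learning from it. The
# blocklist entries are placeholders; a real one would be curated and
# paired with smarter filtering than set intersection.
BLOCKLIST = {"slur1", "slur2"}

learned_phrases = []

def maybe_learn(utterance: str) -> str:
    tokens = set(utterance.lower().split())
    if tokens & BLOCKLIST:
        return "I'm not learning that."
    learned_phrases.append(utterance)  # only screened input shapes the bot
    return f"Tell me more about {utterance.split()[0]}..."  # ELIZA-style echo

print(maybe_learn("robots are fascinating"))
```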

As software engineers, we tend to forget sometimes that while we are building interactions within complex systems, our participation is a functional part of the design2. We shouldn’t just be tossing things out into a production environment without understanding the environment.

Clearly, the decision to put Tay out there did not factor that in – because it has been blatantly obvious, from the days of Usenet forward, that people can be jerks. Trolls abound. It takes something a bit more to deal with that, and they should have known that.

I’d wager that at least one engineer said, “This is a bad idea”. Listen to your team.

1Sorry, Microsoft, but that’s kinda what it looked like to me.
2Recommended reading: Design as Participation.

SunTechRamble: Liability And Technology

Atomic Cruiser.

The really interesting thing that happened this week relates to the regulation of a computer system as a driver (at least in some circumstances). It means that computer systems are gaining ‘privileges’ that were formerly only for humans. It was bound to happen sooner or later, but admittedly I blinked when I saw it.

Google’s efforts and its returns in this area are noteworthy:

It appears that Google has persuaded federal regulators that — in some situations at least — the Tin Man has a heart.

In a letter sent this month to Google, Paul Hemmersbaugh, the chief counsel for the National Highway Traffic Safety Administration, seemed to accept that the computers controlling a self-driving car are the same as a human driver…

So there’s the very cool side of this, where we could celebrate a win. Technology in this area has gotten to the point where we can replace human drivers by virtue of increased safety. Google has been posting monthly reports on their self-driving car project, and it seems that a self-driving car’s greatest danger comes from behind. Google’s first accident involving one of their vehicles was in July of last year – and they were rear-ended.

It’s going to get more complicated if you consider the architecture.

If the vehicle is self-contained, it will likely need software updates. That means unpatched cars may be roaming the countryside, since unpatched software is all over the place.

If the vehicle is completely stupid without an internet connection, as the Amazon Echo is, then connectivity to the controlling application will be an issue.

It’s most likely to be a hybrid of both. Where does your responsibility as a passenger in a vehicle you own begin and end? Will you be able to modify your own vehicle as you can now? What about auto insurance – will that go away, or will we be paying insurance on a vehicle we may not own and can’t control ourselves?
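One hedged guess at how the unpatched-car problem gets handled: the vehicle refuses to enable autonomy below a minimum firmware version, falling back to a human driver. The version numbers and the policy itself are invented for illustration:

```python
# Sketch of a version gate: no self-driving on stale firmware. The versions
# and the policy are assumptions, not any manufacturer's actual scheme.
MINIMUM_SAFE_VERSION = (4, 2, 0)

def autonomy_allowed(installed_version: tuple[int, int, int]) -> bool:
    # Tuples compare element by element, so this is a simple version check.
    return installed_version >= MINIMUM_SAFE_VERSION

print(autonomy_allowed((4, 1, 9)))  # -> False: fall back to manual driving
print(autonomy_allowed((4, 2, 1)))  # -> True
```

Of course, that just moves the question: who is responsible when the gate itself is wrong?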

Technology and Law are about to meet again. It’s going to get messy.

You might want to start negotiating your side now.