So, I got an Anki Vector. My reasons for buying one were pretty simple, really – it seemed like a throwback to the 70s, when I had a Big Trak, a programmable machine that often had me shooting my mother with a laser and harassing the family dog.
With Big Trak’s Logo-ish programming, there were tangible results, even if the ‘fire phaser’ command was really just a flashing light. It was the 1970s, after all, when Star Wars and Star Trek reigned supreme.
So the idea of the Anki Vector was easy for me to justify. I’ve been toying with the idea of building and programming a personal robot, and this would let me skip the ‘building’ part.
Out of the Box
The Anki Vector needed some charging in its little home station, and I dutifully installed the application on my phone, followed the instructions, and connected it to my Wi-Fi. While people have said they had problems with the voice recognition, I have not. Just speak clearly and at an even pace, and Vector seems to handle things well.
Vector’s camera seems to recognize faces only within a range of about 12–24 inches. After some training – roughly 30 minutes – it can identify me, even with glasses, as long as my face stays within that range of its face.
It’s a near-sighted robot, apparently, which had me wondering whether that limitation is something that could be worked around through the API.
It is an expressive robot – it borrows from WALL-E in that regard, it seems. And while it can go to the Internet and impress your friends by reading things off of Wikipedia aloud, it’s not actually that smart. In that regard, it’s Wikipedia on tracks with expressive eyes that, yes, you can change the color of.
Really, within the first hour you run out of tricks with Vector. The marketing team apparently wrote the technical documentation, which is certainly easy to read – largely because it doesn’t actually say much. I’m still trying to figure out why the cube came with it – somewhere, it said the cube helps Vector navigate outside of its ‘home area’ – but navigate and do what?
Explore and do what? Take a picture and see it where? The documentation lacks clarity on these things. While petting Vector has an odd satisfaction to it, that doesn’t quite give me enough.
On December 6th, I tweeted at Anki and asked them about the API – because with the hardware in the Vector, I should be able to do some groovy things and expand its functionality.
Crickets for the last three days.
Without that API, I think Vector is limited to the novelty part of the store… which is sad, because I had hoped it would be a lot more.
Maybe that API will come out before I forget that I have a Vector.