From AndRo.bot To Pronghorn – The Mistakes, The Lessons and The Draft

Somewhere in another post on this blog I mentioned that I was planning to upgrade the AndRo.bot rover I was working on. This decision was made long before AndRo was even complete. AndRo was jinxed from the onset. The initial plan to route all user communications through the cloud proved disastrous: the latency and redundancy introduced by the cloud were so costly for real-time data communication that I ended up using an embedded HTTP server instead. But then new problems started showing up, mainly because I was using Android. Not only was Java crappy, but the phone's myriad background services and puny processing power were taking their toll. Even the hardware was pretty basic (software was the priority). But people were impressed. And it was appreciated.

But as I already mentioned, I was not satisfied. Although I learnt a lot while making it, most of the learning came from mistakes. And programming is a lot like life in this respect – make a big enough mistake and you are scarred for life. In my program, the scars were redundancies and bottlenecks embedded somewhere in the 7,000 lines of code. The most prominent mistakes were (1) Java and OpenCV, along with a background HTTP server and lots of sensor processing, all on an Android phone. ROFLMAO. What was I even thinking? (2) Prioritizing software over hardware, resulting in hardware that was basically crap.

But there is something you can do with a program that you can't do with life – delete it. And that's what I decided to do. If I wanted to make something remotely useful for any application more critical than playing around or spying on people, I had to restart from scratch. And that's how the idea behind Pronghorn was born.

Pronghorn would have a pan/tilt stereo camera, a robotic arm and a four-motor differential drive. The motors (pan/tilt, arm and differential drive) would be controlled by an Arduino. The stereo camera (SimpleCV) and the AI (PyBrain) would run in Python on a Raspberry Pi. The mono camera, communication, voice synthesis and speech recognition would run on an Android phone, as before. See what I just did? I mentioned hardware before software. Priorities change, man. That's life. In fact, I won't talk about software in this post at all. But you can safely assume that Pronghorn would feature several functionalities that AndRo.bot didn't have, including, among other things, autonomous navigation, some amount of natural-language control, 2D & 3D live streaming and a much more interactive UI.
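(OK, one tiny bit of software, just to make that split concrete. Below is a rough sketch of how I picture the Pi side talking to the rest of the robot: SimpleCV grabbing the two camera feeds, and pyserial pushing a one-line drive command down to the Arduino. The port name, baud rate, camera indices and the "D <left> <right>" text command are all assumptions at this point, not the final protocol.)

```python
# Sketch of the Raspberry Pi side of the split: capture a frame from each
# stereo camera with SimpleCV and send a differential-drive command to the
# Arduino over USB serial. Everything below the imports is placeholder
# plumbing, not Pronghorn's actual design.
from SimpleCV import Camera
import serial
import time

# The Arduino usually shows up as /dev/ttyACM0 on a Raspberry Pi (assumed).
arduino = serial.Serial('/dev/ttyACM0', 115200, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

# Two USB webcams -> two SimpleCV cameras (indices 0 and 1 assumed).
left_cam = Camera(0)
right_cam = Camera(1)

def send_drive(left_speed, right_speed):
    """Send left/right motor speeds (-255..255) as a plain text line."""
    arduino.write(("D %d %d\n" % (left_speed, right_speed)).encode())

while True:
    left = left_cam.getImage()
    right = right_cam.getImage()
    # ... stereo processing / PyBrain decision-making would go here ...
    send_drive(100, 100)   # placeholder: just drive straight ahead
    time.sleep(0.1)
```

The point of the text-based command is that the Arduino sketch only has to parse one short line per cycle, so the motor side stays dumb and the Pi keeps all the brains.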

In the next few weeks or months (depending on the availability of money and parts), I'll work exclusively on the hardware. I'll start working on the pan/tilt platform for the stereo camera towards the end of this week. It's based on SparkFun's sensor pan/tilt bracket.

[Figure: The architecture of the pan/tilt platform for the stereo camera]
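Since the whole stereo head rides on just two hobby servos, I'm also thinking of something like the helper below to keep the bracket away from its mechanical stops. Again, only a sketch: the angle limits, the serial port and the "P <pan> <tilt>" command are numbers and names I'm assuming for now, not measurements or a finished protocol.

```python
# Sketch of a pan/tilt guard for the stereo head. Assumptions (not
# measurements): two hobby servos with roughly 0-180 degrees of travel,
# the Arduino on /dev/ttyACM0, and a made-up "P <pan> <tilt>\n" command
# that the Arduino sketch would parse and hand to Servo.write().
import serial

PAN_MIN, PAN_MAX = 10, 170     # assumed safe pan travel (degrees)
TILT_MIN, TILT_MAX = 30, 150   # assumed safe tilt travel (degrees)

def clamp(value, lo, hi):
    """Keep a requested angle inside the bracket's mechanical limits."""
    return max(lo, min(hi, value))

def point_camera(port, pan_deg, tilt_deg):
    """Clamp the requested angles and forward them to the Arduino."""
    pan = clamp(int(pan_deg), PAN_MIN, PAN_MAX)
    tilt = clamp(int(tilt_deg), TILT_MIN, TILT_MAX)
    port.write(("P %d %d\n" % (pan, tilt)).encode())

if __name__ == "__main__":
    arduino = serial.Serial('/dev/ttyACM0', 115200, timeout=1)
    point_camera(arduino, 90, 90)   # centre the stereo head
```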
