I have been a big, big fan of space stuff ever since I was a kid. I can totally attribute this to TV and comics.
This love for space matured in college. I joined Team Studsat (to build a 10 × 10 × 30 cm imaging satellite) and later took a course on ‘Space and Rocket Dynamics’.
Naturally, I was excited on 5th August, 2012, when the Mars Curiosity rover landed in Gale Crater. The way NASA launched and landed the vehicle was enough to inspire me to check out robotics.
It has been eight months since then. I did a bit of research, discovered Android, OpenCV and AI, and decided to mix them up to create my own human-friendly mini-rover.
Some features already implemented on the rover are voice recognition, voice synthesis, voice control, motion detection, face detection, line following, automated guided vehicle mode, remote surveillance and gyroscope-based remote control. It can be controlled over HTTP (the Internet, an intranet, the cloud, etc.) or through SMS. The rover doesn’t use any sensors apart from those provided with the Android device; the Android camera, used with OpenCV, is the primary sensor. The rover is not suitable for space exploration, but hey… I don’t have the budget for that.
In the next few posts, I’ll be writing about the implementation details of various features.