Visual Homing Project: Methodology and Results

The visual homing project at IIT Bombay was a success: we achieved visual homing on a quadcopter. The road to this point wasn't smooth, though; it took a third complete rewrite of the algorithm before it finally worked. I wrote earlier about the panoramic stitching approach, in which positions in the room were calculated from the positions of matched features in a stitched image. The problem was that many parts of the room looked nearly identical to one another.

After that, I attached 16 distinct feature markers around the lab for the robot to identify, so that repeated patterns in the room could be avoided, and tried the panorama approach again, but I soon realized that detecting keypoints directly would be more robust.

So I wrote C++ code to detect the labels in the room, using blob detection and colour segmentation in HSV colour space. The code worked reasonably well, and with a few tweaks here and there I was able to detect labels quite reliably, although some issues with lighting were (and still are) unsolved.
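The detector itself isn't reproduced here, but the HSV segmentation step can be sketched in plain C++. The real pipeline would use OpenCV (`cv::cvtColor` with `COLOR_BGR2HSV` and a blob detector); the struct, function names, and threshold ranges below are illustrative assumptions, not the project's actual values:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical HSV triple; OpenCV's cvtColor plays this role in practice.
struct Hsv { float h; float s; float v; };  // h in [0,360), s and v in [0,1]

Hsv rgbToHsv(float r, float g, float b) {  // r, g, b in [0,1]
    float mx = std::max({r, g, b});
    float mn = std::min({r, g, b});
    float d = mx - mn;
    float h = 0.0f;
    if (d > 0.0f) {
        if (mx == r)      h = 60.0f * std::fmod((g - b) / d + 6.0f, 6.0f);
        else if (mx == g) h = 60.0f * ((b - r) / d + 2.0f);
        else              h = 60.0f * ((r - g) / d + 4.0f);
    }
    float s = (mx > 0.0f) ? d / mx : 0.0f;
    return {h, s, mx};
}

// Example threshold for a saturated red label. Segmenting on hue is what
// makes the approach partly lighting-tolerant: brightness changes mostly
// move v, not h. The exact ranges would be tuned per lab lighting.
bool isRedLabel(const Hsv& p) {
    bool hueOk = (p.h < 15.0f || p.h > 345.0f);
    return hueOk && p.s > 0.6f && p.v > 0.3f;
}
```

Thresholding every pixel this way yields a binary mask on which blob detection can find the labels' centroids.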

On top of that sat the planner code, which had been verified in MATLAB and just had to be ported to C++. There also had to be a manual-override-cum-teleoperation node to take over in case the algorithm didn't perform as expected. I wrote the former in C++ and the latter in Python (the goodness of ROS 😀).
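The teleoperation node isn't shown in the post; the core of any such node is a mapping from key presses to velocity commands. A minimal sketch of that mapping (in C++ for consistency with the rest of the examples, though the actual node was in Python; the struct, field names, and key bindings are all illustrative assumptions, not the real ROS `geometry_msgs/Twist` type):

```cpp
// Velocity command analogous to a ROS Twist message (names illustrative).
struct Twist { double linear_x = 0, linear_y = 0, linear_z = 0, angular_z = 0; };

// Map a pressed key to a velocity command; unknown keys mean "hover".
Twist keyToTwist(char key, double speed = 0.3) {
    Twist t;
    switch (key) {
        case 'w': t.linear_x  =  speed; break;  // forward
        case 's': t.linear_x  = -speed; break;  // backward
        case 'a': t.linear_y  =  speed; break;  // strafe left
        case 'd': t.linear_y  = -speed; break;  // strafe right
        case 'q': t.angular_z =  speed; break;  // yaw left
        case 'e': t.angular_z = -speed; break;  // yaw right
        case 'r': t.linear_z  =  speed; break;  // ascend
        case 'f': t.linear_z  = -speed; break;  // descend
        default: break;                          // hover in place
    }
    return t;
}
```

In the real node this command would be published on the drone's velocity topic, pre-empting whatever the planner is sending.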

Figure: image divided into regions of interest by green and red lines; blue lines denote labels.

My initial approach for detecting three labels per image was to segment the image into sections containing enough keypoints and then run the image processing again on each section. That approach didn't work well, and I eventually switched to working directly with the keypoints already obtained, which turned out better. Also, I was initially using the rotZ (yaw) readings of the AR.Drone 2.0, but they tend to drift and render the algorithm useless, so I switched to the magnetometer readings, which are noisier but do not drift.
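To make the drift/noise trade-off concrete: a heading can be recovered from the horizontal magnetometer components with a single `atan2`, and the noise can be tamed by averaging recent readings. This is a generic sketch, not the project's actual code; the axis convention and smoothing choice are assumptions:

```cpp
#include <cmath>
#include <vector>

// Heading from the horizontal magnetometer components, assuming the drone
// is roughly level. Drift-free (referenced to magnetic north) but noisy.
double magHeading(double mx, double my) {
    return std::atan2(my, mx);  // radians in (-pi, pi]
}

// Average several noisy headings via their sin/cos components, which avoids
// the +/-pi wrap-around problem of a naive arithmetic mean.
double averageHeading(const std::vector<double>& headings) {
    double s = 0.0, c = 0.0;
    for (double h : headings) { s += std::sin(h); c += std::cos(h); }
    return std::atan2(s, c);
}
```

A gyro-integrated yaw like rotZ has the opposite profile: smooth sample to sample, but its error grows without bound, which is exactly why it made the homing algorithm unusable over longer runs.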

Finally, after a lot of testing and several changes of approach, the last one worked out quite well; here's a screenshot of it running and publishing velocity commands.

Figure: the best approach, directly using the already obtained keypoints.
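The post doesn't spell out how three labels are chosen from the pool of keypoints, so purely as an illustration of one plausible scheme (the `Label` struct and separation threshold below are assumptions, not the project's code): keep the largest blob centroids while rejecting candidates so close to an already chosen one that they likely belong to the same physical label.

```cpp
#include <algorithm>
#include <vector>

// Detected label candidate: blob centroid in pixel coordinates plus area.
struct Label { double x, y; double area; };

// Pick up to three labels per frame, preferring large blobs but skipping
// candidates within minSeparation pixels of an already chosen label.
std::vector<Label> pickThreeLabels(std::vector<Label> cands,
                                   double minSeparation = 40.0) {
    std::sort(cands.begin(), cands.end(),
              [](const Label& a, const Label& b) { return a.area > b.area; });
    std::vector<Label> chosen;
    for (const Label& c : cands) {
        bool farEnough = true;
        for (const Label& k : chosen) {
            double dx = c.x - k.x, dy = c.y - k.y;
            if (dx * dx + dy * dy < minSeparation * minSeparation)
                farEnough = false;
        }
        if (farEnough) chosen.push_back(c);
        if (chosen.size() == 3) break;
    }
    return chosen;
}
```

Working on the already-detected centroids like this is cheap compared to re-running segmentation on image sub-regions, which matches why the keypoint-based approach won out.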

As usual, all the code is uploaded to a GitHub repository, which is now public. I will also upload a video of it running on the quadcopter soon. I learned a lot in the process, and my first experience with aerial robotics was a good one. I hope to turn this initiation into a passion.
