Autonomous Visual Homing Project

Started: December 2016

Stopped: Early January 2017

Areas worked on: Image feature extraction, Programming the planner, Low level control implementation

Current Tasks: Programming the planner, extracting image features, testing for convergence, implementing ROS code

I worked under Prof. Leena Vachhani of the Systems & Control Engineering department, IIT Bombay, for one month. My work aimed at returning a quadcopter to a set homing position using only visual sensor data.

I worked on image processing, motion planning, simulation, and low-level control on a UAV, implementing this paper over the course of one month, from December 4th to January 4th. I worked on the Parrot AR Drone v2.

All the code is being updated in this GitHub repo (private for now; it will be made public in January).


In brief, I wrote ROS-connected code to detect feature points, plan the path using the algorithm described in the paper, and perform the locomotion while taking feedback only from the monocular camera on the AR Drone v2.
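The post doesn't spell out the paper's algorithm, so as a purely illustrative sketch of the general idea behind bearing-only visual homing (not the method from the paper), here is a minimal average-landmark-vector (ALV) example. The function names `average_landmark_vector` and `homing_vector` are my own, and the landmark bearings would in practice come from matching feature points between the current camera frame and a stored home frame:

```python
import math

def average_landmark_vector(bearings):
    """Average of the unit vectors pointing toward each landmark,
    given the landmarks' bearing angles (radians) in the robot frame."""
    n = len(bearings)
    x = sum(math.cos(b) for b in bearings) / n
    y = sum(math.sin(b) for b in bearings) / n
    return (x, y)

def homing_vector(current_bearings, home_bearings):
    """ALV homing rule: move along the difference between the ALV
    seen now and the ALV recorded at the home position."""
    cx, cy = average_landmark_vector(current_bearings)
    hx, hy = average_landmark_vector(home_bearings)
    return (cx - hx, cy - hy)

# At the home position the two ALVs coincide, so the commanded
# motion vanishes: homing_vector(b, b) == (0.0, 0.0).
```

In a ROS node, this direction would be converted into velocity commands for the drone at each camera frame, with the loop terminating once the homing vector's magnitude falls below a threshold.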

I’ll upload a video of the algorithm working on the quad soon.
