Blimp Localization

To cut a long story short, my supervisor has a small blimp and wants to know its position relative to aerial images taken beforehand. The blimp will carry a small camera, and the images it captures will be matched against our set of aerial images.

Currently I am using Webots® and an example blimp model provided by Webots to simulate the localization problem. A set of map tiles taken from Yahoo Maps is laid out on a square array to represent the ground, and the blimp model is flown across the map, capturing images with a camera attached to the model. I am then required to match the captured images against a map taken from Google Maps. Note that two different map sources were used so that we are localizing against images taken at different times. Below is a snapshot of the environment in action.

A white blimp flies over Hamburg; I have to find where it is and where it's going. But the real question raised by the above image is: who decided that the Snipping Tool in Windows Vista should capture irregularly sized images?

Our general approach is to use scale-invariant feature transform (SIFT) features in conjunction with Bayesian filters to perform localization. We turned to SIFT because we wanted to handle kidnap situations and global localization situations. Matching features extracted from the camera image against the map features is a quick process (roughly O(log(size_of_map) + size_of_image) per frame), allowing us to quickly detect an unexpected change in the current situation and update our estimate promptly.
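To make the complexity claim concrete, here is a minimal sketch of logarithmic-time descriptor matching using a kd-tree, as one might do with SIFT descriptors. The descriptors below are random stand-ins (the real ones would come from a SIFT extractor), and scipy's `cKDTree` is my choice for illustration, not necessarily what we use in the actual system:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical stand-ins for 128-dimensional SIFT descriptors:
# a large map database, and a camera frame whose features are
# noisy copies of 50 map features.
map_descriptors = rng.normal(size=(10_000, 128)).astype(np.float64)
chosen = rng.choice(10_000, size=50, replace=False)
frame_descriptors = map_descriptors[chosen] + rng.normal(scale=0.01, size=(50, 128))

# Build the tree once per map; each nearest-neighbour query is then
# O(log n) on average, so matching a whole frame costs roughly
# (number of frame features) * log(size of map).
tree = cKDTree(map_descriptors)
dists, idx = tree.query(frame_descriptors, k=2)

# Lowe-style ratio test: keep a match only if the best neighbour is
# clearly closer than the second best, discarding ambiguous matches.
good = dists[:, 0] < 0.8 * dists[:, 1]
matches = idx[good, 0]
print(f"{good.sum()} of {len(frame_descriptors)} features matched")
```

Because the tree is built once over the map features, a sudden change of scene (a kidnap) only costs one round of queries to detect, which is what makes the prompt re-estimation possible.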

We decided to use Bayesian filters so that the system could filter out false matches and refine our estimate over time. We actually get a large number of false matches when localizing over a series of frames, so the system has to accumulate the correct matches while incrementally updating its estimate. Bayesian filters allow for exactly this sort of iterative accumulation.
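The accumulation idea can be sketched with a toy 1-D histogram (discrete Bayes) filter. The real system would run over 2-D map positions, and all the numbers below (cell count, motion kernel, likelihoods) are made up for illustration, but the predict/update structure is the standard one: a one-off false match nudges the belief briefly, while repeated correct matches dominate over time:

```python
import numpy as np

n_cells = 20
belief = np.full(n_cells, 1.0 / n_cells)  # uniform prior: global localization

def predict(belief, kernel):
    """Motion update: blur the belief with the motion model, renormalize."""
    b = np.convolve(belief, kernel, mode="same")
    return b / b.sum()

def update(belief, likelihood):
    """Measurement update: multiply by the match likelihood, renormalize."""
    b = belief * likelihood
    return b / b.sum()

kernel = np.array([0.1, 0.8, 0.1])  # stay mostly in place, drift a little
for t in range(5):
    belief = predict(belief, kernel)
    likelihood = np.full(n_cells, 0.05)  # background rate for spurious matches
    likelihood[12] = 0.9                 # correct match, seen every frame
    if t == 1:
        likelihood[3] = 0.9              # a one-off false match
    belief = update(belief, likelihood)

print(belief.argmax())  # the consistently matched cell wins
```

The false match at cell 3 briefly raises that cell's probability, but because it is not repeated, the recursive multiplication washes it out while the evidence at cell 12 compounds.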


Amir H. Bakhtiary



