Fusing Depth and Video Using Rao-Blackwellized Particle Filter

Title: Fusing Depth and Video Using Rao-Blackwellized Particle Filter
Publication Type: Book Chapter
Year of Publication: 2005
Authors: Agrawal A, Chellappa R
Editors: Pal S, Bandyopadhyay S, Biswas S
Book Title: Pattern Recognition and Machine Intelligence
Series Title: Lecture Notes in Computer Science
Volume: 3776
Pagination: 521 - 526
Publisher: Springer Berlin / Heidelberg
ISBN Number: 978-3-540-30506-4
Abstract

We address the problem of fusing sparse and noisy depth data obtained from a range finder with features obtained from intensity images to estimate ego-motion and refine the 3D structure of a scene using a Rao-Blackwellized particle filter. For scenes with low depth variability, the algorithm offers an alternate way of performing Structure from Motion (SfM), starting with a flat depth map. Instead of using 3D depths, we formulate the problem using 2D image-domain parallax and show that, conditioned on the non-linear motion parameters, the parallax magnitudes with respect to the projection of the vanishing point form a linear subsystem independent of camera motion, whose distributions can be analytically integrated. Thus, the structure is obtained by estimating parallax with respect to the given depths using a Kalman filter, while only the ego-motion is estimated using a particle filter. Hence, the required number of particles becomes independent of the number of feature points, which is an improvement over previous algorithms. Experimental results on both synthetic and real data show the effectiveness of our approach.
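The key structural idea in the abstract, that conditioned on the sampled non-linear motion the parallax magnitudes are linear-Gaussian and can be filtered analytically, can be illustrated with a generic Rao-Blackwellized particle filter sketch. The motion proposal, the measurement map (h, b), and all noise levels below are hypothetical placeholders rather than the paper's equations; the sketch only shows the structure in which the particle count is independent of the number of feature points.

```python
# Minimal Rao-Blackwellized particle filter sketch (assumed, generic model):
# particles carry hypotheses of the non-linear camera motion, and, conditioned
# on each hypothesis, a scalar Kalman filter per feature updates the linear
# parallax magnitude analytically.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_FEATURES = 100, 50   # particle count independent of feature count

# Per-particle state: motion hypothesis, parallax mean/variance per feature, weight.
motions = 0.01 * rng.standard_normal((N_PARTICLES, 6))   # 6-DoF motion hypotheses
parallax_mean = np.zeros((N_PARTICLES, N_FEATURES))      # start from a flat depth map
parallax_var = np.ones((N_PARTICLES, N_FEATURES))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def rbpf_step(z, meas_var=0.05, proc_var=1e-4):
    """One filter step given per-feature measurements z (shape: N_FEATURES)."""
    global motions, parallax_mean, parallax_var, weights
    # 1. Sample new motion hypotheses (non-linear part, handled by particles).
    motions += np.sqrt(proc_var) * rng.standard_normal(motions.shape)
    # 2. Hypothetical measurement model, linear in parallax given the motion:
    #    z_i = h(motion) * parallax_i + b(motion) + noise
    h = 1.0 + motions[:, :1]      # (N_PARTICLES, 1), broadcast over features
    b = motions[:, 1:2]
    # 3. Kalman update of the parallax (linear part), per particle and feature.
    innov = z - (h * parallax_mean + b)
    S = h**2 * parallax_var + meas_var
    K = h * parallax_var / S
    parallax_mean += K * innov
    parallax_var *= (1.0 - K * h)
    # 4. Weight particles by the marginal measurement likelihood, then resample.
    loglik = -0.5 * np.sum(innov**2 / S + np.log(2 * np.pi * S), axis=1)
    weights *= np.exp(loglik - loglik.max())
    weights /= weights.sum()
    idx = rng.choice(N_PARTICLES, N_PARTICLES, p=weights)
    motions, parallax_mean, parallax_var = motions[idx], parallax_mean[idx], parallax_var[idx]
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

# Example: run a few steps on synthetic per-feature measurements.
for _ in range(5):
    rbpf_step(z=0.1 * rng.standard_normal(N_FEATURES))
```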

URL: http://dx.doi.org/10.1007/11590316_82