Multi-camera Tracking of Articulated Human Motion Using Motion and Shape Cues

Title: Multi-camera Tracking of Articulated Human Motion Using Motion and Shape Cues
Publication Type: Book Chapter
Year of Publication: 2006
Authors: Sundaresan A, Chellappa R
Editors: Narayanan P, Nayar S, Shum H-Y
Book Title: Computer Vision – ACCV 2006
Series Title: Lecture Notes in Computer Science
Volume: 3852
Pagination: 131 - 140
Publisher: Springer Berlin / Heidelberg
ISBN Number: 978-3-540-31244-4
Abstract

We present a framework and algorithm for tracking articulated human motion. We use multiple calibrated cameras and an articulated human shape model. Tracking is performed using motion cues as well as image-based cues (such as silhouettes and “motion residues”, hereafter referred to as spatial cues), as opposed to constructing a 3D volume or visual hull. Our algorithm consists of a predictor and a corrector: the predictor estimates the pose at time t + 1 using motion information between the images at t and t + 1. The error in the estimated pose is then corrected using spatial cues from the images at t + 1. In our predictor, we use robust multi-scale parametric optimisation to estimate the pixel displacement for each body segment. We then use an iterative procedure to estimate the change in pose from the pixel displacements of points on the individual body segments. We present a method for fusing information from different spatial cues, such as silhouettes and “motion residues”, into a single energy function. We then express this energy function in terms of the pose parameters and find the optimum pose for which the energy is minimised.

URL: http://dx.doi.org/10.1007/11612704_14
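
The abstract describes a predictor-corrector loop: a motion-based prediction of the pose at t + 1, followed by a correction that minimises an energy fusing spatial cues. The following is a minimal sketch of that loop structure only, assuming a generic pose vector and placeholder cue terms; the function names, the optimiser choice, and the energy terms are illustrative assumptions, not the paper's actual estimators.

import numpy as np
from scipy.optimize import minimize

def predict_pose(pose_t, images_t, images_t1):
    """Predictor: estimate the pose at t+1 from per-segment pixel
    displacements between frames t and t+1 (placeholder for the paper's
    robust multi-scale parametric optimisation and iterative pose update)."""
    delta = np.zeros_like(pose_t)  # hypothetical: zero motion estimated
    return pose_t + delta

def spatial_energy(pose, silhouettes_t1, motion_residues_t1):
    """Corrector energy: fuse spatial cues (silhouettes, motion residues)
    at frame t+1 into a single scalar expressed in the pose parameters."""
    e_silhouette = 0.0  # hypothetical silhouette-mismatch term
    e_residue = 0.0     # hypothetical motion-residue term
    return e_silhouette + e_residue

def track_frame(pose_t, images_t, images_t1, silhouettes_t1, residues_t1):
    """One tracking step: predict from motion cues, then correct the pose
    by minimising the fused spatial-cue energy."""
    pose_pred = predict_pose(pose_t, images_t, images_t1)
    result = minimize(spatial_energy, pose_pred,
                      args=(silhouettes_t1, residues_t1),
                      method="Nelder-Mead")
    return result.x

In this sketch, tracking a sequence amounts to calling track_frame once per frame, feeding the corrected pose back in as pose_t for the next step.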