TY - JOUR
T1 - Multicamera Tracking of Articulated Human Motion Using Shape and Motion Cues
JF - Image Processing, IEEE Transactions on
Y1 - 2009
A1 - Sundaresan, A.
A1 - Chellappa, Rama
KW - 2D shape cues
KW - 3D shape cues
KW - algorithms
KW - Models, Anatomic
KW - articulated human motion
KW - automatic algorithm
KW - Models, Biological
KW - Movement
KW - Posture
KW - Skeleton
KW - Video Recording
KW - Models, Computer-Assisted
KW - Eigenvalues and eigenfunctions
KW - human pose estimation
KW - Humans
KW - Image motion analysis
KW - Image processing
KW - image registration
KW - Image segmentation
KW - Image sequences
KW - kinematic singularity
KW - Laplacian eigenmaps
KW - multicamera tracking algorithm
KW - pixel displacement
KW - pose estimation
KW - single-frame registration technique
KW - temporal registration method
KW - tracking
AB - We present a completely automatic algorithm for initializing and tracking the articulated motion of humans using image sequences obtained from multiple cameras. A detailed articulated human body model composed of sixteen rigid segments, allowing both translation and rotation at the joints, is used. Voxel data of the subject obtained from the images is segmented into the different articulated chains using Laplacian eigenmaps. The segmented chains are registered in a subset of the frames using a single-frame registration technique and subsequently used to initialize the pose in the sequence. A temporal registration method is proposed to identify the partially segmented or unregistered articulated chains in the remaining frames of the sequence. The proposed tracker uses motion cues such as pixel displacement as well as 2-D and 3-D shape cues such as silhouettes, motion residue, and skeleton curves. The tracking algorithm consists of a predictor that uses motion cues and a corrector that uses shape cues. The use of complementary cues in the tracking alleviates the twin problems of drift and convergence to local minima. The use of multiple cameras also allows us to deal with the problems of self-occlusion and kinematic singularity. We present tracking results on sequences with different kinds of motion to illustrate the effectiveness of our approach. The pose of the subject is correctly tracked for the duration of the sequence, as can be verified by inspection.
VL - 18
SN - 1057-7149
CP - 9
M3 - 10.1109/TIP.2009.2022290
ER -