Markerless Monocular Tracking of Articulated Human Motion

Title: Markerless Monocular Tracking of Articulated Human Motion
Publication Type: Conference Papers
Year of Publication: 2007
Authors: Liu H, Chellappa R
Conference Name: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007)
Date Published: 2007/04
Keywords: anatomical structure; articulated blob model; articulated human motion; cameras; gait analysis; global optimization; human motion recognition; image sequence; image sequences; linear equations; markerless monocular tracking; optical flow; optimisation; scaled orthographic projection; single camera; spatial-temporal intensity; Tai Chi sequences
Abstract

This paper presents a method for tracking general 3D articulated human motion using a single camera with unknown calibration data. No markers, special clothing, or devices are assumed to be attached to the subject. In addition, both the camera and the subject are allowed to move freely, so that long-term, view-independent human motion tracking and recognition are possible. We exploit the fact that the anatomical structure of the human body can be approximated by an articulated blob model. Optical flow under scaled orthographic projection is used to relate the spatial-temporal intensity change of the image sequence to the human motion parameters. These motion parameters are obtained by solving a set of linear equations to achieve global optimization. The correctness and robustness of the proposed method are demonstrated using Tai Chi sequences.
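As a rough sketch of the estimation step the abstract describes (not the authors' implementation): under brightness constancy, each pixel contributes one linear constraint Ix*u + Iy*v + It = 0, and because the image velocity (u, v) is linear in the motion parameters under scaled orthographic projection, stacking the per-pixel constraints gives an overdetermined linear system solvable by least squares. The Jacobian J mapping motion parameters to pixel velocities is a hypothetical stand-in for the paper's articulated-blob-model derivation.

    import numpy as np

    def solve_motion_parameters(Ix, Iy, It, J):
        """Least-squares motion-parameter estimate from optical-flow constraints.

        Ix, Iy, It : (N,) spatial and temporal image gradients at N pixels.
        J          : (N, 2, P) hypothetical Jacobian mapping the P motion
                     parameters to each pixel's (u, v) image velocity under
                     scaled orthographic projection (an assumption here).
        """
        # Brightness constancy at each pixel: Ix*u + Iy*v + It = 0.
        # With (u, v) = J @ theta, each pixel yields one linear equation:
        #   (Ix * J_u + Iy * J_v) @ theta = -It
        A = Ix[:, None] * J[:, 0, :] + Iy[:, None] * J[:, 1, :]  # (N, P)
        b = -It                                                   # (N,)
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        return theta

    # Synthetic check: recover known parameters from noise-free constraints.
    rng = np.random.default_rng(0)
    N, P = 500, 6
    J = rng.normal(size=(N, 2, P))
    theta_true = rng.normal(size=P)
    uv = np.einsum('nij,j->ni', J, theta_true)
    Ix, Iy = rng.normal(size=N), rng.normal(size=N)
    It = -(Ix * uv[:, 0] + Iy * uv[:, 1])
    assert np.allclose(solve_motion_parameters(Ix, Iy, It, J), theta_true)

Because every pixel's constraint enters one stacked system, the solve is global over all motion parameters at once, which matches the abstract's "set of linear equations to achieve global optimization."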

DOI: 10.1109/ICASSP.2007.366002