Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds

Title: Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds
Publication Type: Journal Article
Year of Publication: 2011
Authors: Abdelkader MF, Abd-Almageed W, Srivastava A, Chellappa R
Journal: Computer Vision and Image Understanding
Volume: 115
Issue: 3
Pagination: 439-455
Date Published: 2011/03
ISSN: 1077-3142
Keywords: Action recognition, Gesture recognition, Riemannian manifolds, Shape space, Silhouette-based approaches
Abstract

This paper addresses the problem of recognizing human gestures from videos using models built from the Riemannian geometry of shape spaces. We represent a human gesture as a temporal sequence of human poses, each characterized by the contour of the associated human silhouette. The shape of a contour is viewed as a point on the shape space of closed curves and, hence, each gesture is characterized and modeled as a trajectory on this shape space. We propose two approaches for modeling these trajectories. In the first, template-based approach, we use dynamic time warping (DTW) to align the different trajectories using elastic geodesic distances on the shape space; the gesture templates are then calculated by averaging the aligned trajectories. In the second approach, we use a graphical model similar to an exemplar-based hidden Markov model: we cluster the gesture shapes on the shape space, build non-parametric statistical models to capture the variations within each cluster, and model each gesture as a Markov model of transitions between these clusters. To evaluate the proposed approaches, an extensive set of experiments was performed on two different data sets representing gesture and action recognition applications. The proposed approaches not only represent the shape and dynamics of the different classes well enough for recognition, but are also robust to some of the errors resulting from segmentation and background subtraction.
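
The template-based approach described above reduces to dynamic time warping over frame-wise shape distances. The sketch below is a minimal, hypothetical illustration of that idea in Python: the names shape_distance and dtw_gesture_distance are invented here, and a Procrustes-style landmark distance is used as a simplified stand-in for the elastic geodesic distance on the shape space of closed curves used in the paper.

import numpy as np

def shape_distance(c1, c2):
    # Rough stand-in for the geodesic shape distance between two contours,
    # each given as an (n_points, 2) array of coordinates.
    # Removes translation and scale, then finds the optimal orthogonal
    # alignment (Procrustes) before measuring the residual difference.
    a = c1 - c1.mean(axis=0)
    b = c2 - c2.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt                      # orthogonal matrix best aligning a to b
    return np.linalg.norm(a @ r - b)

def dtw_gesture_distance(traj1, traj2):
    # DTW cost between two gestures, each a list of (n_points, 2) contours.
    n, m = len(traj1), len(traj2)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = shape_distance(traj1[i - 1], traj2[j - 1])
            acc[i, j] = d + min(acc[i - 1, j],      # insertion
                                acc[i, j - 1],      # deletion
                                acc[i - 1, j - 1])  # match
    return acc[n, m]

In such a setup, per-class templates would be obtained by averaging DTW-aligned training trajectories, and a test gesture would be assigned to the class whose template yields the smallest DTW cost.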

URL: http://www.sciencedirect.com/science/article/pii/S1077314210002377
DOI: 10.1016/j.cviu.2010.10.006