Action recognition based on human movement characteristics

Title: Action recognition based on human movement characteristics
Publication Type: Conference Paper
Year of Publication: 2009
Authors: Dondera R, Doermann D, Davis LS
Conference Name: Workshop on Motion and Video Computing (WMVC '09)
Date Published: 2009/12
Keywords: ballistic dynamics; computational cost; computer vision; human action recognition; human movement characteristics; motion descriptor; motion vector data; pattern recognition; probability density function; robustness; shape information; short correlated linear movements; stability; visual databases

We present a motion descriptor for human action recognition in settings where appearance and shape information are unreliable. Unlike other motion-based approaches, we leverage image characteristics specific to human movement to achieve better robustness and lower computational cost. Drawing on recent work on motion recognition with ballistic dynamics, an action is modeled as a series of short correlated linear movements and represented with a probability density function over motion vector data. We target common human actions composed of ballistic movements, and our descriptor can handle both short actions (e.g. reaching with the hand) and long actions with events at relatively stable time offsets (e.g. walking). The proposed descriptor is used for both classification and detection of action instances, in a nearest-neighbor framework. We evaluate the descriptor on the KTH action database and obtain a recognition rate of 90% in a relevant test setting, comparable to state-of-the-art approaches that use other cues in addition to motion. We also acquired a database of actions with slight occlusion and a human actor manipulating objects of various shapes and appearances. This database makes the use of appearance and shape information problematic, but we obtain a recognition rate of 95%. Our work demonstrates that human movement has distinctive patterns, and that these patterns can be used effectively for action recognition.
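To make the core idea concrete, the sketch below shows one simple way to represent motion as an empirical probability density over motion vectors and classify with a nearest neighbor. This is an illustrative toy, not the authors' actual descriptor: the angle/magnitude histogram binning, the magnitude cap, the Euclidean distance, and all function names here are our own assumptions for demonstration.

```python
import numpy as np

def motion_descriptor(motion_vectors, bins=8, max_mag=10.0):
    """Toy descriptor: a normalized 2D histogram (an empirical probability
    density) over motion-vector direction and magnitude.
    `motion_vectors` is an (N, 2) array of (dx, dy) motion estimates.
    Bin count and magnitude cap are arbitrary illustrative choices, not
    values from the paper."""
    v = np.asarray(motion_vectors, dtype=float)
    ang = np.arctan2(v[:, 1], v[:, 0])              # direction in [-pi, pi]
    mag = np.clip(np.hypot(v[:, 0], v[:, 1]), 0.0, max_mag)
    hist, _, _ = np.histogram2d(
        ang, mag, bins=bins,
        range=[[-np.pi, np.pi], [0.0, max_mag]])
    return hist.ravel() / max(hist.sum(), 1.0)      # normalize to a PDF

def nearest_neighbor_label(query, train_descriptors, train_labels):
    """1-NN classification by Euclidean distance between descriptors."""
    dists = [np.linalg.norm(query - d) for d in train_descriptors]
    return train_labels[int(np.argmin(dists))]

# Toy usage: distinguish predominantly rightward motion from upward motion.
rng = np.random.default_rng(0)
right = rng.normal([3.0, 0.0], 0.3, size=(200, 2))  # mostly +x motion
up = rng.normal([0.0, 3.0], 0.3, size=(200, 2))     # mostly +y motion
train = [motion_descriptor(right), motion_descriptor(up)]
labels = ["reach_right", "reach_up"]
query = motion_descriptor(rng.normal([3.0, 0.0], 0.3, size=(200, 2)))
print(nearest_neighbor_label(query, train, labels))  # prints "reach_right"
```

In the paper the descriptor additionally encodes the temporal structure of ballistic movement segments; the histogram above only captures the instantaneous motion-vector distribution.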