Recognition of Human Actions from 3D Motion Capture Data Based on a Novel Spatial-temporal Hidden Markov Model
Research output: Refereed conference paper (no ISBN/ISSN), peer-reviewed
|Original language||Chinese (Traditional)|
|Publication status||Published - 13 Oct 2010|
|Period||13 - 15 October 2010|
This paper presents a novel Hidden Markov Model, called the spatial-temporal hidden Markov model, for the recognition of human actions from 3D motion capture (MoCap) data. We exploit both the spatial dependency between each pair of connected joints of the articulated skeletal structure and the continuous temporal movement of the individual joints. Each action is represented by a sequence of 3D joint positions, and a single spatial-temporal hidden Markov model is learnt to capture these two types of dependencies for each action class. Results on recognizing 11 different action classes show that our approach outperforms the traditional HMM.
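The recognition scheme described in the abstract — one model learnt per action class, with a test sequence assigned to the class whose model gives it the highest likelihood — can be illustrated with a plain discrete HMM. The sketch below is not the paper's spatial-temporal model; the class names ("wave", "walk"), the two-symbol observation alphabet, and all model parameters are hypothetical values chosen for illustration rather than learnt from MoCap data.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities, A: state-transition matrix,
    B: per-state emission probabilities over discrete symbols."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    scale = sum(alpha)
    log_like = math.log(scale)
    alpha = [a / scale for a in alpha]
    for symbol in obs[1:]:
        # Propagate forward probabilities one time step, then rescale
        # so the recursion stays numerically stable.
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][symbol]
                 for i in range(n)]
        scale = sum(alpha)
        log_like += math.log(scale)
        alpha = [a / scale for a in alpha]
    return log_like

# Hypothetical per-class models (pi, A, B); in the paper these would be
# learnt from sequences of 3D joint positions for each action class.
MODELS = {
    "wave": ([0.5, 0.5],
             [[0.9, 0.1], [0.1, 0.9]],
             [[0.9, 0.1], [0.8, 0.2]]),   # emits mostly symbol 0
    "walk": ([0.5, 0.5],
             [[0.9, 0.1], [0.1, 0.9]],
             [[0.1, 0.9], [0.2, 0.8]]),   # emits mostly symbol 1
}

def classify(obs):
    """Assign the sequence to the class whose HMM scores it highest."""
    return max(MODELS, key=lambda c: forward_log_likelihood(obs, *MODELS[c]))
```

A sequence dominated by symbol 0 is classified as "wave" and one dominated by symbol 1 as "walk"; the paper's contribution is to replace this single chain with a model that also captures the spatial dependency between connected skeletal joints.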