基於空間時序隱馬爾科夫模型的人體三維動作捕捉數據行為識別

Recognition of Human Actions from 3D Motion Capture Data Based on a Novel Spatial-temporal Hidden Markov Model

Research output: Conference Papers (RGC: 31A, 31B, 32, 33); 32_Refereed conference paper (no ISBN/ISSN); peer-reviewed

Author(s)

趙瓊; 葉豪盛; 周學海

Detail(s)

Original language: Chinese (Traditional)
Publication status: Published - 13 Oct 2010

Conference

Title: Chinagraph 2010
Place: China
City: Nanjing
Period: 13 - 15 October 2010

Abstract

This paper presents a novel hidden Markov model, called the spatial-temporal hidden Markov model, for the recognition of human actions from 3D motion capture (MoCap) data. We exploit both the spatial dependency between each pair of connected joints in the articulated skeletal structure and the continuous temporal movement of the individual joints. Each action is represented by a sequence of 3D joint positions, and a single spatial-temporal hidden Markov model is learnt per action class to capture these two types of dependencies. Results on recognizing 11 different action classes show that our approach outperforms the traditional HMM.
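The classification scheme described in the abstract (one model per action class, then picking the class whose model assigns the highest likelihood to an observed sequence) can be sketched with a plain discrete HMM and the standard forward algorithm. This sketch is not the paper's spatial-temporal variant: the state counts, the two toy action models, and the use of discrete symbols standing in for quantized 3D joint positions are all illustrative assumptions.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | HMM) via the scaled forward algorithm.

    pi: initial state distribution; A: state transition matrix;
    B: per-state emission distribution over discrete symbols.
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_like = 0.0
    for t in range(1, len(obs)):
        scale = sum(alpha)          # rescale to avoid underflow on long sequences
        log_like += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return log_like + math.log(sum(alpha))

def classify(obs, models):
    """Return the action class whose HMM gives the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two hypothetical 2-state action models over 2 observation symbols:
# "walk" mostly emits symbol 0, "jump" mostly emits symbol 1.
A = [[0.8, 0.2], [0.2, 0.8]]
models = {
    "walk": ([0.9, 0.1], A, [[0.9, 0.1], [0.5, 0.5]]),
    "jump": ([0.9, 0.1], A, [[0.1, 0.9], [0.5, 0.5]]),
}
print(classify([0, 0, 1, 0, 0], models))  # prints "walk"
```

In the paper's setting the per-class models would additionally encode the spatial dependency between connected skeletal joints, rather than treating the observation stream as a single symbol sequence as this toy version does.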

Citation Format(s)

基於空間時序隱馬爾科夫模型的人體三維動作捕捉數據行為識別. / 趙瓊; 葉豪盛; 周學海.

2010. Paper presented at Chinagraph 2010, Nanjing, China.