基於空間時序隱馬爾科夫模型的人體三維動作捕捉數據行為識別

Translated title of the contribution: Recognition of Human Actions from 3D Motion Capture Data Based on a Novel Spatial-temporal Hidden Markov Model

趙瓊, 葉豪盛, 周學海

Research output: Conference Papers › RGC 32 - Refereed conference paper (without host publication) › peer-review

Abstract

This paper presents a novel Hidden Markov Model, called the spatial-temporal hidden Markov model, for the recognition of human actions from 3D motion capture (MoCap) data. We exploit both the spatial dependency between each pair of connected joints of the articulated skeletal structure and the continuous temporal movement of the individual joints. Each action is represented by a sequence of 3D joint positions, and a single spatial-temporal hidden Markov model is learned to capture these two types of dependencies for each action class. Results of recognizing 11 different action classes show that our approach outperforms the traditional HMM.
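The abstract describes the standard per-class HMM recognition scheme that the proposed spatial-temporal model extends: one model is trained per action class, and a test sequence is assigned to the class whose model gives the highest likelihood. As background only, here is a minimal sketch of that baseline decision rule using the forward algorithm; the model parameters, class names, and discrete (quantized-pose) observations are illustrative assumptions, not values from the paper.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probs, A: transition matrix, B: emission matrix),
    computed with the forward algorithm in log space for stability."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        m = alpha.max()  # log-sum-exp trick over previous states
        alpha = m + np.log(np.exp(alpha - m) @ A) + np.log(B[:, o])
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Hypothetical per-class HMMs: 2 latent motion phases, 3 pose symbols.
# (Parameters are hand-set for illustration, not learned from MoCap data.)
models = {
    "walk": (np.array([0.9, 0.1]),
             np.array([[0.8, 0.2], [0.2, 0.8]]),
             np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])),
    "jump": (np.array([0.5, 0.5]),
             np.array([[0.5, 0.5], [0.5, 0.5]]),
             np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
}

def classify(obs):
    # Maximum-likelihood classification: pick the class whose HMM
    # assigns the sequence the highest log-likelihood.
    return max(models, key=lambda k: log_forward(obs, *models[k]))
```

In the paper's setting, the observations would instead be continuous 3D joint positions, and the proposed model additionally couples the joints along the skeletal structure rather than treating the pose as a single flat observation.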
Original language: Chinese (Traditional)
Publication status: Published - 13 Oct 2010
Event: Chinagraph 2010 - Nanjing, China
Duration: 13 Oct 2010 – 15 Oct 2010

Conference

Conference: Chinagraph 2010
Place: China
City: Nanjing
Period: 13/10/10 – 15/10/10
