Human action recognition via skeletal and depth based feature fusion

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45) › 32_Refereed conference paper (with ISBN/ISSN) › peer-review

19 Scopus Citations

Author(s)

Li, Meng; Leung, Howard; Shum, Hubert P. H.

Detail(s)

Original language: English
Title of host publication: Proceedings - Motion in Games 2016: 9th International Conference on Motion in Games, MIG 2016
Publisher: Association for Computing Machinery, Inc
Pages: 123-132
ISBN (Print): 9781450345927
Publication status: Published - 10 Oct 2016

Conference

Title: 9th International Conference on Motion in Games, MIG 2016
Place: United States
City: San Francisco
Period: 10 - 12 October 2016

Abstract

This paper addresses the problem of recognizing human actions captured with depth cameras. Human action recognition is a challenging task because articulated action data is high dimensional in both the spatial and temporal domains. An effective way to handle this complexity is to divide the human body into different body parts according to the skeletal joint positions and to perform recognition based on part-based feature descriptors. Since different types of features may share similar hidden structures, and different actions may be well characterized both by properties common to all features (sharable structure) and by properties specific to a single feature (specific structure), we propose a joint group sparse regression-based learning method to model each action. Our method mines the sharable and specific structures among the part-based multiple features, while joint group sparse regularization weights the importance of these part-based feature structures and thereby favors the selection of discriminative ones. To represent the dynamics and appearance of the human body parts, we employ part-based multiple features extracted from skeleton and depth data, respectively. Using group sparse regularization techniques, we derive an algorithm for mining the key part-based features within the proposed learning framework. The features derived from the learnt weight matrices are more discriminative for multi-task classification. Through extensive experiments on three public datasets, we demonstrate that our approach outperforms existing methods.
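
As a rough illustration of the group-sparse selection idea described above (not the authors' algorithm), the following minimal NumPy sketch fits a least-squares multi-task regression with an l2,1-style group penalty over row groups of the weight matrix, solved by proximal gradient descent. The feature groups stand in for part-based skeleton and depth descriptors; all names, shapes, and hyperparameters here are illustrative assumptions.

    import numpy as np

    def group_soft_threshold(W, groups, tau):
        # Proximal operator of tau * sum_g ||W[g, :]||_F over row groups of W.
        W = W.copy()
        for g in groups:
            norm = np.linalg.norm(W[g, :])
            W[g, :] = 0.0 if norm <= tau else (1.0 - tau / norm) * W[g, :]
        return W

    def fit_group_sparse(X, Y, groups, lam=0.5, n_iter=200):
        # X: (n, d) stacked part-based features; Y: (n, c) one-hot action labels.
        W = np.zeros((X.shape[1], Y.shape[1]))
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ W - Y)               # gradient of 0.5 * ||X W - Y||_F^2
            W = group_soft_threshold(W - step * grad, groups, step * lam)
        return W

    # Hypothetical usage: two row groups standing in for a skeleton-based and a
    # depth-based part descriptor.
    X = np.random.randn(50, 8)
    Y = np.eye(5)[np.random.randint(0, 5, 50)]
    W = fit_group_sparse(X, Y, groups=[slice(0, 4), slice(4, 8)])

Row groups whose weights are driven exactly to zero by the proximal step correspond to part-based features the penalty deems non-discriminative, which mirrors the feature-structure selection behavior described in the abstract.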

Research Area(s)

  • Action recognition, Feature fusion, Group sparse, Regularization

Citation Format(s)

Human action recognition via skeletal and depth based feature fusion. / Li, Meng; Leung, Howard; Shum, Hubert P. H.

Proceedings - Motion in Games 2016: 9th International Conference on Motion in Games, MIG 2016. Association for Computing Machinery, Inc, 2016. p. 123-132, 2994268.
