Multiview Skeletal Interaction Recognition Using Active Joint Interaction Graph

Research output: Journal Publications and Reviews · Publication in refereed journal · peer-review

18 Scopus Citations


Detail(s)

Original language: English
Article number: 7577818
Pages (from-to): 2293-2302
Journal / Publication: IEEE Transactions on Multimedia
Volume: 18
Issue number: 11
Publication status: Published - 1 Nov 2016

Abstract

This paper addresses the problem of recognizing human skeletal interactions using multiview data captured by depth sensors. Interactions among people are important cues for group and crowd behavior analysis. In this paper, we focus on modeling person-person skeletal interactions for human activity recognition. First, we propose a novel graph model for each single-view case to encode class-specific person-person interaction patterns. In particular, we model each person-person interaction as an attributed graph, designed to preserve the complex spatial structure among skeletal joints according to their activity levels, as well as the spatio-temporal joint features. Then, by combining the graph models of the single-view cases, we propose a multigraph model to characterize each multiview interaction. Finally, we apply a general multiple kernel learning method to determine the optimal kernel weights for the proposed multigraph model while jointly learning the optimal classifier. We evaluate the proposed approach on the M2I dataset, the SBU Kinect interaction dataset, and our own interaction dataset. The experimental results show that the proposed approach outperforms several existing interaction recognition methods.
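
To make the graph construction concrete, below is a minimal Python sketch of how an attributed person-person interaction graph might be assembled from two skeletal sequences. The activity measure, node attributes, and edge weighting are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def joint_activity(traj):
    # Activity level of one joint: total displacement of its 3-D trajectory
    # over the sequence (an illustrative measure, not necessarily the paper's).
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def interaction_graph(person_a, person_b):
    # person_a, person_b: (T, J, 3) arrays of T frames x J 3-D joint positions.
    # Returns node attributes (mean position plus activity per joint) and a
    # dense edge-weight matrix that emphasizes pairs of highly active joints.
    joints = np.concatenate([person_a, person_b], axis=1)     # (T, 2J, 3)
    activity = np.array([joint_activity(joints[:, j])
                         for j in range(joints.shape[1])])
    node_attrs = np.hstack([joints.mean(axis=0),              # spatial features
                            activity[:, None]])               # activity level
    edge_weights = np.outer(activity, activity)               # activity-scaled edges
    return node_attrs, edge_weights

Similarly, the multiview step can be sketched as a weighted sum of per-view graph-kernel Gram matrices fed to a kernel classifier. Here the per-view kernels are random stand-ins, and the weights beta are fixed placeholders for the values the paper learns jointly with the classifier via multiple kernel learning.

import numpy as np
from sklearn.svm import SVC

def multigraph_kernel(K_views, beta):
    # Weighted sum of per-view graph-kernel Gram matrices.
    return sum(b * K for b, K in zip(beta, K_views))

# Toy stand-ins: random PSD matrices in place of real per-view graph kernels.
rng = np.random.default_rng(0)
n_samples, n_views = 30, 3
K_views = []
for _ in range(n_views):
    X = rng.normal(size=(n_samples, 10))
    K_views.append(X @ X.T)              # PSD by construction

beta = np.full(n_views, 1.0 / n_views)   # placeholder for MKL-learned weights
K = multigraph_kernel(K_views, beta)
y = rng.integers(0, 2, size=n_samples)   # placeholder interaction-class labels

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                   # training accuracy on the toy data

In the paper, the view weights and the classifier are optimized jointly by a general multiple kernel learning method; the fixed uniform beta above merely marks where those learned weights would enter.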

Research Area(s)

  • Activity analysis, depth sensor, graph kernel, graph-based modeling, human interaction recognition, multiple kernel learning, multiview