Graph-based representation learning for automatic human motion segmentation

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

5 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 9205–9224
Journal / Publication: Multimedia Tools and Applications
Volume: 75
Issue number: 15
Online published: 18 Apr 2016
Publication status: Published - Aug 2016

Abstract

3D human motion segmentation is a primary processing step for the recognition and analysis of motion data recorded by a motion capture system. We propose a novel graph-based method to segment long motion sequences into segments of different actions. We first introduce a novel Active Joint Relationship Graph (AJRG) construction method, which connects the joints deemed important for a motion segment. In particular, the top-N Relative Ranges of Joint Relative Distances (RRJRD) are proposed to determine which joints should be connected in the resulting graph, because these measures indicate the normalized activity levels among the joint pairs. Different motion segments may thus yield different graph structures, so graph construction adapts to the characteristics of each segment and is able to represent a meaningful spatial structure. Second, combining this with the proposed joint covariance descriptor and a temporal pyramid, we define the Active Joint Relationship Graph Kernel (AJRGK) to measure the spatio-temporal consistency between two consecutive motion segments. Furthermore, we propose the Graph Kernel-Based Temporal Cut (GKBTC) to segment the given motion sequences. The experimental results show that our method achieves superior performance compared to state-of-the-art methods for 3D human motion segmentation.
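The RRJRD-based graph construction described in the abstract can be sketched roughly as follows. This is a hedged, minimal illustration of the idea only, not the authors' implementation: the function name `rrjrd_graph`, the normalization by the mean distance, and the epsilon guard are all assumptions, and the paper's exact RRJRD definition may differ.

```python
import numpy as np

def rrjrd_graph(segment, top_n=5):
    """Sketch of an AJRG-style graph: connect the top-N joint pairs
    ranked by the relative range of their pairwise distances.

    segment: (T, J, 3) array of 3D joint positions over T frames.
    Returns a (J, J) symmetric 0/1 adjacency matrix.
    NOTE: illustrative only; the paper's precise RRJRD formula is assumed.
    """
    T, J, _ = segment.shape
    scores = {}
    for i in range(J):
        for j in range(i + 1, J):
            # pairwise distance between joints i and j at each frame
            d = np.linalg.norm(segment[:, i] - segment[:, j], axis=1)
            # "relative range": spread of the distance, normalized by its
            # mean so the score reflects activity level, not limb length
            scores[(i, j)] = (d.max() - d.min()) / (d.mean() + 1e-8)
    # keep only the N most active joint pairs as graph edges
    top = sorted(scores, key=scores.get, reverse=True)[:top_n]
    A = np.zeros((J, J), dtype=int)
    for i, j in top:
        A[i, j] = A[j, i] = 1
    return A
```

Because the edge set is recomputed per segment, two segments containing different actions naturally produce different graph structures, which is the adaptivity the abstract emphasizes.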

Research Area(s)

  • Multimedia, Motion representation, Graph kernel matching, Motion segmentation