Retrieval of spatial–temporal motion topics from 3D skeleton data

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

5 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 973–984
Journal / Publication: Visual Computer
Volume: 35
Issue number: 6-8
Online published: 6 May 2019
Publication status: Published - Jun 2019

Abstract

Retrieving a specific human motion from 3D skeleton data is challenging because of the articulated complexity of the human body. We propose a context-based motion document formation method that captures geometric variations by computing covariance descriptors over skeletal joint locations and joint relative distances, and temporal variations by performing a coarse-to-fine segmentation of the motion sequence. The descriptors of a query motion traverse all motion categories to locate its motion words, which serve as the basic units of a motion document. The discrete motion words of the different spatiotemporal descriptors are also mapped to disjoint index ranges, adding prior knowledge of motion temporal order to latent Dirichlet allocation (LDA). Similarity matching is based on the semantically meaningful motion-topic distributions produced by LDA. Experiments on public datasets show the effectiveness and robustness of the proposed method compared with existing models.
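Two of the building blocks in the abstract can be sketched in code: a covariance descriptor computed over per-frame joint positions and pairwise joint distances, and the mapping of each descriptor's motion words into a disjoint index range before feeding LDA. This is a minimal illustration on synthetic data, not the authors' implementation; the array shapes, the Hellinger-based topic similarity, and all function names are assumptions for the sketch.

```python
import numpy as np

def motion_features(joints):
    """joints: (T, J, 3) array of joint positions over T frames.
    Returns (T, D) features: flattened positions plus pairwise joint distances."""
    T, J, _ = joints.shape
    positions = joints.reshape(T, J * 3)
    # Pairwise relative distances between joints (upper triangle only).
    iu = np.triu_indices(J, k=1)
    diffs = joints[:, :, None, :] - joints[:, None, :, :]   # (T, J, J, 3)
    dists = np.linalg.norm(diffs, axis=-1)[:, iu[0], iu[1]]  # (T, J*(J-1)/2)
    return np.hstack([positions, dists])

def covariance_descriptor(features):
    """Covariance matrix of per-frame feature vectors: (T, D) -> (D, D)."""
    centered = features - features.mean(axis=0)
    return centered.T @ centered / (features.shape[0] - 1)

def offset_words(word_ids, descriptor_index, vocab_size):
    """Map words of the k-th descriptor into a disjoint index range
    [k * vocab_size, (k + 1) * vocab_size), so LDA can tell descriptors apart."""
    return [descriptor_index * vocab_size + w for w in word_ids]

def topic_similarity(p, q):
    """Similarity of two topic distributions (1 minus Hellinger distance)."""
    return 1.0 - np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy example: 50 frames, 5 joints of synthetic skeleton data.
rng = np.random.default_rng(0)
joints = rng.normal(size=(50, 5, 3))
C = covariance_descriptor(motion_features(joints))
```

In practice each motion segment from the coarse-to-fine segmentation would yield one such covariance matrix, which is quantized to a motion word; `offset_words` then keeps words from different spatiotemporal descriptors in separate vocabulary ranges within a single motion document.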

Research Area(s)

  • Latent Dirichlet allocation, Motion documents, Skeleton-based motion retrieval, Spatial–temporal descriptors