Retrieval of spatial–temporal motion topics from 3D skeleton data
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Author(s): Men, Qianhui; Leung, Howard
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 973–984 |
Journal / Publication | Visual Computer |
Volume | 35 |
Issue number | 6-8 |
Online published | 6 May 2019 |
Publication status | Published - Jun 2019 |
Abstract
Retrieval of a specific human motion from 3D skeleton data is challenging because of the articulated complexity of the human body. We propose a context-based motion-document formation method that captures geometric variation by computing covariance descriptors over skeletal joint locations and joint relative distances, and temporal variation by performing a coarse-to-fine segmentation of the motion sequence. The descriptors of a query motion are matched across all motion categories to determine its motion words, which serve as the basic units of a motion document. The discrete motion words of the different spatiotemporal descriptors are further mapped to disjoint index ranges, injecting prior knowledge of temporal order into latent Dirichlet allocation (LDA). Similarity matching is then performed on the semantically meaningful motion-topic distributions produced by LDA. Experiments on public datasets show the effectiveness and robustness of the proposed method compared with existing models.
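The covariance descriptor at the core of the motion-document formation can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `covariance_descriptor`, the per-frame feature layout, and the upper-triangle vectorization are all assumptions, chosen because vectorizing the upper triangle is a common way to turn a symmetric covariance matrix into a fixed-length descriptor.

```python
import numpy as np

def covariance_descriptor(segment):
    """Covariance descriptor of one motion segment (illustrative sketch).

    segment: array of shape (frames, features), where each row stacks the
    3D joint locations (and, in the paper's setting, joint relative
    distances) of one frame. Returns the upper triangle of the feature
    covariance matrix, flattened into a vector.
    """
    cov = np.cov(segment, rowvar=False)      # (features, features) covariance
    iu = np.triu_indices(cov.shape[0])       # upper triangle incl. diagonal
    return cov[iu]

# Toy example: a 10-frame segment of 3 joints (9 coordinates per frame).
rng = np.random.default_rng(0)
segment = rng.standard_normal((10, 9))
desc = covariance_descriptor(segment)
print(desc.shape)  # (45,) -- the 9*10/2 upper-triangle entries
```

In a coarse-to-fine scheme such descriptors would be computed once per segment at each segmentation level, and each descriptor then quantized into a discrete motion word for the LDA stage.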
Research Area(s)
- Latent Dirichlet allocation, Motion documents, Skeleton-based motion retrieval, Spatial–temporal descriptors
Citation Format(s)
Retrieval of spatial–temporal motion topics from 3D skeleton data. / Men, Qianhui; Leung, Howard.
In: Visual Computer, Vol. 35, No. 6-8, 06.2019, p. 973–984.