Monocular human motion tracking with discriminative sparse representation

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), 21_Publication in refereed journal, peer-review

2 Scopus Citations

Author(s)

Bai, Tianxiang; Li, Youfu; Zhou, Xiaolong

Detail(s)

Original language: English
Pages (from-to): 403-414
Journal / Publication: Advanced Robotics
Volume: 28
Issue number: 6
Online published: 15 Jan 2014
Publication status: Published - 2014

Abstract

In this work, we address the problem of monocular human motion tracking based on discriminative sparse representation. The proposed method jointly trains the dictionary and a discriminative linear classifier to separate the human subject from the background. We show that, with online dictionary learning, the tracking algorithm can adapt to variations in human appearance and the background environment. We compared the proposed method with four state-of-the-art tracking algorithms on eight benchmark video clips (Faceocc, Sylv, David, Singer, Girl, Ballet, OneLeaveShopReenter2cor, and ThreePastShop2cor). Qualitative and quantitative experimental validation results are discussed at length. The proposed human tracking algorithm achieves superior tracking results and runs at four frames per second in Matlab on a standard desktop machine. © 2014 Taylor & Francis and The Robotics Society of Japan.
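The abstract describes the approach only at a high level. The following minimal Python sketch (not the authors' implementation) illustrates the general idea of tracking-by-classification with sparse codes: a dictionary encodes image patches, and a linear classifier on the sparse codes separates target from background, with both models updated online. It uses scikit-learn's MiniBatchDictionaryLearning and SGDClassifier, updated in alternation rather than jointly as in the paper, and random arrays stand in for image patches; all names and parameters here are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): tracking-by-classification
# with sparse representations. Dictionary and classifier are updated online in
# alternation, whereas the paper trains them jointly. Random vectors stand in
# for vectorized image patches of the target ("human") and the background.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
patch_dim, n_atoms = 64, 32            # vectorized patch size, dictionary size

# Hypothetical initial training samples: positive (target) and negative (background).
pos = rng.normal(0.5, 0.1, size=(50, patch_dim))
neg = rng.normal(0.0, 0.1, size=(50, patch_dim))
X0 = np.vstack([pos, neg])
y0 = np.array([1] * 50 + [0] * 50)

dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=0.1, random_state=0)
dico.fit(X0)                           # learn an initial dictionary
clf = SGDClassifier(loss="hinge")      # discriminative linear classifier on sparse codes
clf.partial_fit(dico.transform(X0), y0, classes=[0, 1])

def track_frame(candidates):
    """Score candidate patches and return the index of the most target-like one."""
    codes = dico.transform(candidates)           # sparse representation
    return int(np.argmax(clf.decision_function(codes)))

def update_model(new_pos, new_neg):
    """Online adaptation to appearance/background change (sequential, not joint)."""
    X = np.vstack([new_pos, new_neg])
    y = np.array([1] * len(new_pos) + [0] * len(new_neg))
    dico.partial_fit(X)                          # refine dictionary online
    clf.partial_fit(dico.transform(X), y)        # refine classifier online

# Example: pick the best of 20 candidate windows in a new frame.
candidates = rng.normal(0.4, 0.2, size=(20, patch_dim))
best = track_frame(candidates)
```

The online updates in `update_model` mirror the adaptation described in the abstract: as appearance and background change, both the dictionary and the classifier are refreshed with newly labeled samples from recent frames.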

Research Area(s)

  • appearance model, human tracking, sparse representation

Citation Format(s)

Monocular human motion tracking with discriminative sparse representation. / Bai, Tianxiang; Li, Youfu; Zhou, Xiaolong.
In: Advanced Robotics, Vol. 28, No. 6, 2014, p. 403-414.
