Robust visual tracking with structured sparse representation appearance model

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

104 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 2390-2404
Journal / Publication: Pattern Recognition
Volume: 45
Issue number: 6
Publication status: Published - Jun 2012

Abstract

In this paper, we present a structured sparse representation appearance model for tracking an object in a video sequence. The idea behind our method is to model the appearance of an object as a sparse linear combination over a structured union of subspaces in a basis library, which consists of a learned Eigen template set and a partitioned occlusion template set. This structured sparse representation framework matches the practical visual tracking problem more closely by taking the contiguous spatial distribution of occlusion into account. To achieve a sparse solution and reduce the computational cost, Block Orthogonal Matching Pursuit (BOMP) is adopted to solve the structured sparse representation problem. Furthermore, to update the Eigen templates over time, an incremental Principal Component Analysis (PCA) based learning scheme is applied to adapt to the varying appearance of the target online. We then build a probabilistic observation model based on the approximation error between the recovered image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the advantages of the proposed algorithm over other state-of-the-art approaches. © 2011 Elsevier Ltd. All rights reserved.
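
As a rough illustration of the reconstruction-based likelihood described in the abstract, the sketch below runs a generic Block Orthogonal Matching Pursuit over a dictionary whose columns are grouped into blocks, then converts the resulting approximation error into an observation probability. This is not the authors' implementation: the dictionary D standing in for the Eigen and occlusion template sets, the block partition, the sparsity level max_blocks, and the noise scale sigma are all illustrative assumptions.

```python
import numpy as np

def block_omp(D, y, blocks, max_blocks, tol=1e-6):
    """Minimal Block Orthogonal Matching Pursuit (BOMP) sketch.

    D          : (d, n) dictionary; columns grouped into blocks
                 (here standing in for Eigen + occlusion templates).
    y          : (d,) vectorised candidate image patch.
    blocks     : list of index arrays, one per column block of D.
    max_blocks : sparsity level counted in active blocks.
    Returns a coefficient vector x with block-sparse support.
    """
    residual = y.copy()
    active = []                      # indices of selected blocks
    x = np.zeros(D.shape[1])
    for _ in range(max_blocks):
        # Select the block whose columns correlate most with the residual.
        scores = [np.linalg.norm(D[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in active:
            active.append(best)
        cols = np.concatenate([blocks[b] for b in active])
        # Least-squares fit restricted to the currently active blocks.
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        x[:] = 0.0
        x[cols] = coef
        residual = y - D @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

def observation_likelihood(D, y, blocks, max_blocks=3, sigma=0.1):
    """Score a candidate patch by its block-sparse reconstruction error."""
    x = block_omp(D, y, blocks, max_blocks)
    err = np.linalg.norm(y - D @ x) ** 2
    return np.exp(-err / (2.0 * sigma ** 2))
```

Inside a particle filter of the kind the abstract describes, each candidate patch sampled under the affine motion model would be scored with a likelihood of this form and the particle weights normalised across the sample set.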

Research Area(s)

  • Appearance model, Block-sparsity, Orthogonal matching pursuit, Sparse representation, Visual tracking