Focalized contrastive view-invariant learning for self-supervised skeleton-based action recognition

Qianhui Men*, Edmond S.L. Ho, Hubert P.H. Shum, Howard Leung

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

25 Citations (Scopus)

Abstract

Learning view-invariant representations is key to improving feature discrimination power for skeleton-based action recognition. Existing approaches cannot effectively remove the impact of viewpoint because their representations are implicitly view-dependent. In this work, we propose a self-supervised framework called Focalized Contrastive View-invariant Learning (FoCoViL), which significantly suppresses view-specific information in a representation space where the viewpoints are coarsely aligned. By maximizing mutual information with an effective contrastive loss between multi-view sample pairs, FoCoViL associates actions with common view-invariant properties and simultaneously separates dissimilar ones. We further propose an adaptive focalization method based on pairwise similarity that enhances contrastive learning, yielding clearer cluster boundaries in the learned space. Unlike many existing self-supervised representation learning methods that rely heavily on supervised classifiers, FoCoViL performs well with both unsupervised and supervised classifiers, achieving superior recognition performance. Extensive experiments also show that the proposed contrastive-based focalization generates a more discriminative latent representation. © 2023 Elsevier B.V.
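
The abstract does not give the objective in closed form; as a minimal sketch of how a pairwise-similarity-based focalization might reweight a multi-view contrastive (InfoNCE) loss, consider the PyTorch fragment below. The function name, the focal form (1 - p)^gamma, and all hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def focalized_infonce(z_a, z_b, temperature=0.1, gamma=2.0):
    """Focal-weighted InfoNCE over multi-view pairs (illustrative sketch).

    z_a, z_b: (N, D) embeddings of the same N skeleton sequences captured
    from two different viewpoints; row i of z_a and row i of z_b form a
    positive pair, and all other rows serve as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # (N, N) scaled cosine similarities
    probs = logits.softmax(dim=1)
    p_pos = probs.diagonal()               # probability mass on the true positive
    # Hypothetical focalization: (1 - p)^gamma down-weights pairs that are
    # already well aligned across views, concentrating the gradient on hard
    # pairs. Detached so the weight scales, but does not redirect, gradients.
    w = (1.0 - p_pos).pow(gamma).detach()
    return -(w * (p_pos + 1e-8).log()).mean()

# Toy usage: 8 sequences embedded in a 128-d space from two viewpoints.
z_view1 = torch.randn(8, 128, requires_grad=True)
z_view2 = torch.randn(8, 128, requires_grad=True)
loss = focalized_infonce(z_view1, z_view2)
loss.backward()
```

In this reading, focalization plays the same role as the modulating factor in focal loss: easy, well-separated multi-view pairs contribute little, which is one plausible route to the clearer cluster boundaries the abstract describes.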
Original language: English
Pages (from-to): 198-209
Journal: Neurocomputing
Volume: 537
Online published: 31 Mar 2023
DOIs
Publication status: Published - 7 Jun 2023

Research Keywords

  • Contrastive learning
  • Self-supervised learning
  • Skeleton-based action recognition
