TY - JOUR
T1 - Focalized contrastive view-invariant learning for self-supervised skeleton-based action recognition
AU - Men, Qianhui
AU - Ho, Edmond S.L.
AU - Shum, Hubert P.H.
AU - Leung, Howard
PY - 2023/6/7
Y1 - 2023/6/7
N2 - Learning view-invariant representations is key to improving the discriminative power of features for skeleton-based action recognition. Existing approaches cannot effectively remove the impact of viewpoint due to their implicit view-dependent representations. In this work, we propose a self-supervised framework called Focalized Contrastive View-invariant Learning (FoCoViL), which significantly suppresses view-specific information in a representation space where the viewpoints are coarsely aligned. By maximizing mutual information with an effective contrastive loss between multi-view sample pairs, FoCoViL associates actions with common view-invariant properties and simultaneously separates the dissimilar ones. We further propose an adaptive focalization method based on pairwise similarity to enhance contrastive learning for a clearer cluster boundary in the learned space. Unlike many existing self-supervised representation learning works that rely heavily on supervised classifiers, FoCoViL performs well with both unsupervised and supervised classifiers, achieving superior recognition performance. Extensive experiments also show that the proposed contrastive-based focalization generates a more discriminative latent representation. © 2023 Elsevier B.V.
KW - Contrastive learning
KW - Self-supervised learning
KW - Skeleton-based action recognition
UR - http://www.scopus.com/inward/record.url?scp=85151545094&partnerID=8YFLogxK
DO - 10.1016/j.neucom.2023.03.070
M3 - RGC 21 - Publication in refereed journal
SN - 0925-2312
VL - 537
SP - 198
EP - 209
JO - Neurocomputing
JF - Neurocomputing
ER -