Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Publication status: Online published - 24 Jun 2024

Abstract

Hyperspectral image (HSI) clustering is an important but challenging task. State-of-the-art (SOTA) methods usually rely on superpixels; however, they do not fully utilize the spatial and spectral information in the 3-D structure of HSI, and their optimization targets are not clustering-oriented. In this work, we first use 3-D and 2-D hybrid convolutional neural networks to extract high-order spatial and spectral features of HSI through pre-training, and then design a superpixel graph contrastive clustering (SPGCC) model to learn discriminative superpixel representations. Reasonable augmented views are crucial for contrastive clustering, yet conventional contrastive learning may hurt the cluster structure, since different samples are pushed apart in the embedding space even when they belong to the same class. In SPGCC, we design two semantic-invariant data augmentations for HSI superpixels: pixel sampling augmentation and model weight augmentation. Sample-level alignment and clustering-center-level contrast are then performed to improve the intra-class similarity and inter-class dissimilarity of superpixel embeddings. Clustering and network optimization are performed alternately. Experimental results on several HSI datasets verify the advantages of the proposed SPGCC over SOTA methods. Our code is available at https://github.com/jhqi/spgcc.

© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
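The sketch below is not the authors' released code (see the GitHub link above for that); it is a minimal illustration, under assumed names and tensor shapes, of two ideas named in the abstract: building a semantically invariant view of a superpixel by averaging a random subset of its pixels, and a loss that aligns the two views of each sample while contrasting clustering centers across views.

```python
# Illustrative sketch only; all function names, shapes, and the keep_ratio /
# temperature values are assumptions, not the SPGCC reference implementation.
import torch
import torch.nn.functional as F

def pixel_sampling_view(pixel_feats, keep_ratio=0.8):
    """One augmented view of a superpixel: average a random subset of its
    pixel features, which preserves the superpixel's semantics."""
    n = pixel_feats.shape[0]
    k = max(1, int(keep_ratio * n))
    idx = torch.randperm(n)[:k]
    return pixel_feats[idx].mean(dim=0)

def alignment_and_center_contrast(z1, z2, assign1, assign2, temperature=0.5):
    """Sample-level alignment plus clustering-center-level contrast.
    z1, z2:          (N, d) L2-normalized embeddings of two views
    assign1, assign2: (N, K) soft cluster assignments for each view
    """
    # Sample-level alignment: pull the two views of the same superpixel
    # together without pushing other samples of the same class apart.
    align = (1 - (z1 * z2).sum(dim=1)).mean()

    # Center-level contrast: the same cluster's centers across views are
    # positives; centers of different clusters are negatives.
    c1 = F.normalize(assign1.t() @ z1, dim=1)   # (K, d) view-1 centers
    c2 = F.normalize(assign2.t() @ z2, dim=1)   # (K, d) view-2 centers
    logits = c1 @ c2.t() / temperature          # (K, K) center similarities
    labels = torch.arange(c1.shape[0])
    contrast = F.cross_entropy(logits, labels)

    return align + contrast

# Toy usage with random tensors standing in for superpixel embeddings.
if __name__ == "__main__":
    N, d, K = 64, 32, 6
    z1 = F.normalize(torch.randn(N, d), dim=1)
    z2 = F.normalize(torch.randn(N, d), dim=1)
    a1 = torch.softmax(torch.randn(N, K), dim=1)
    a2 = torch.softmax(torch.randn(N, K), dim=1)
    print(alignment_and_center_contrast(z1, z2, a1, a2).item())
```

In this reading, alignment avoids the repulsion between same-class samples that the abstract identifies as harmful, while the contrast term keeps the K cluster centers separated across the two views.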

Research Area(s)

  • Hyperspectral image, superpixel, graph learning, contrastive learning, clustering

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.