Deep Attention-guided Graph Clustering with Dual Self-supervision

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › Peer-review

22 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 3296-3307
Number of pages: 13
Journal / Publication: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 33
Issue number: 7
Online published: 27 Dec 2022
Publication status: Published - Jul 2023

Abstract

Existing deep embedding clustering methods fail to sufficiently utilize the available off-the-shelf information from feature embeddings and cluster assignments, limiting their performance. To this end, we propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC). Specifically, DAGC first utilizes a heterogeneity-wise fusion module to adaptively integrate the features of the auto-encoder and the graph convolutional network in each layer, and then uses a scale-wise fusion module to dynamically concatenate the multi-scale features from different layers. These modules learn an informative feature embedding via an attention-based mechanism. In addition, we design a distribution-wise fusion module that leverages cluster assignments to acquire clustering results directly. To better exploit the off-the-shelf information in the cluster assignments, we develop a dual self-supervision solution consisting of a soft self-supervision strategy with a Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss. Extensive experiments on nine benchmark datasets validate that our method consistently outperforms state-of-the-art methods. In particular, our method improves the ARI by more than 10.29% over the best baseline. The code will be publicly available at https://github.com/ZhihaoPENG-CityU/DAGC. © 2022 IEEE.
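
The authors' implementation is hosted at the GitHub link above. As a minimal illustrative sketch only (not the authors' code), the PyTorch snippet below shows how two ingredients named in the abstract could look: an attention-based fusion of auto-encoder and GCN features, and the dual self-supervision losses (a KL-divergence soft loss against a sharpened target distribution, plus a pseudo-label hard loss). All class and function names, tensor shapes, and the confidence threshold are assumptions introduced for illustration.

```python
# Illustrative sketch only; names, shapes, and the 0.8 threshold are assumptions,
# not the authors' implementation (see the linked GitHub repository for that).
import torch
import torch.nn.functional as F


class HeterogeneityWiseFusion(torch.nn.Module):
    """Attention-weighted combination of AE and GCN features at one layer."""

    def __init__(self, dim: int):
        super().__init__()
        self.att = torch.nn.Linear(2 * dim, 2)  # one attention score per branch

    def forward(self, h_ae: torch.Tensor, h_gcn: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.att(torch.cat([h_ae, h_gcn], dim=1)), dim=1)  # (n, 2)
        return w[:, :1] * h_ae + w[:, 1:] * h_gcn


def target_distribution(q: torch.Tensor) -> torch.Tensor:
    """Sharpen soft assignments q (n x k) into a DEC-style auxiliary target."""
    weight = q ** 2 / q.sum(dim=0)
    return (weight.t() / weight.sum(dim=1)).t()


def soft_self_supervision(q: torch.Tensor) -> torch.Tensor:
    """Soft strategy: KL(P || Q) between the sharpened target P and assignments Q."""
    p = target_distribution(q).detach()
    return F.kl_div(q.log(), p, reduction="batchmean")


def hard_self_supervision(q: torch.Tensor, conf_threshold: float = 0.8) -> torch.Tensor:
    """Hard strategy: cross-entropy against high-confidence pseudo labels from Q."""
    conf, pseudo = q.max(dim=1)
    mask = conf > conf_threshold
    if not mask.any():  # no confident samples yet; contribute nothing
        return q.new_zeros(())
    return F.nll_loss(q[mask].log(), pseudo[mask])


if __name__ == "__main__":
    n, d, k = 8, 16, 3
    fuse = HeterogeneityWiseFusion(d)
    h = fuse(torch.randn(n, d), torch.randn(n, d))   # fused embedding, shape (n, d)
    q = torch.softmax(torch.randn(n, k), dim=1)      # stand-in soft cluster assignments
    loss = soft_self_supervision(q) + hard_self_supervision(q)
```

In this reading, the soft loss refines the full assignment distribution while the hard loss commits only the most confident samples to pseudo labels, which is one plausible way to combine the two strategies the abstract describes.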

Research Area(s)

  • Unsupervised learning, deep embedding clustering, feature fusion, self-supervision

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.