Adaptive Representation Learning in Unsupervised Deep Graph Clustering
Student thesis: Doctoral Thesis
Detail(s)
Award date | 4 Sept 2023
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(dec53e32-df33-4d0c-8b43-e8c79f22359f).html
Abstract
Clustering is a fundamental yet challenging task in data analysis that aims to partition similar samples into the same group and dissimilar samples into different groups. Recently, benefiting from breakthroughs in unsupervised deep graph clustering, the combination of a conventional neural network, typically an auto-encoder (AE), with the graph convolutional network (GCN) has achieved state-of-the-art performance, in which the auto-encoder extracts the node attribute features and the graph convolutional network captures the topological graph features. However, existing methods do not sufficiently utilize the readily available information in the feature embeddings and cluster assignments, which limits their performance. In view of this limitation, this thesis focuses on adaptive representation learning in unsupervised deep graph clustering. The main works involve adaptive embedding, assignment, and graph representation learning, and can be summarized as follows:
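For reference, the two feature extractors mentioned above are commonly formulated as follows. This is a standard textbook formulation rather than the thesis's exact parameterization: the auto-encoder branch is trained with a reconstruction loss on the node attributes, while the GCN branch propagates features over the normalized adjacency matrix.

```latex
% Auto-encoder branch: encode the node attributes X and reconstruct them.
\mathbf{H}_{\mathrm{AE}} = f_{\mathrm{enc}}(\mathbf{X}), \qquad
\hat{\mathbf{X}} = f_{\mathrm{dec}}(\mathbf{H}_{\mathrm{AE}}), \qquad
\mathcal{L}_{\mathrm{rec}} = \lVert \mathbf{X} - \hat{\mathbf{X}} \rVert_F^2 .

% GCN branch: propagate features over the normalized adjacency matrix,
% where \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I} and \tilde{\mathbf{D}} is its degree matrix.
\mathbf{H}^{(l+1)} = \sigma\!\left( \tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}}
\tilde{\mathbf{D}}^{-\frac{1}{2}} \mathbf{H}^{(l)} \mathbf{W}^{(l)} \right).
```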
First, we propose a novel unsupervised deep graph clustering method named the attention-driven graph clustering network (AGCN) to address the following issues: existing works (i) lack a flexible combination mechanism that adaptively fuses the two kinds of features from the auto-encoder and the graph convolutional network to boost representation learning, and (ii) overlook the multi-scale information embedded at different layers for the subsequent clustering assignment, leading to inferior clustering results. Specifically, AGCN mainly comprises two attention-driven feature fusion modules, namely the AGCN heterogeneity-wise fusion module (AGCN-H) and the AGCN scale-wise fusion module (AGCN-S), both of which exploit an attention-based mechanism to dynamically measure the importance of the corresponding features before fusion. AGCN-H adaptively merges the auto-encoder feature and the graph convolutional network feature from the same layer, while AGCN-S dynamically concatenates the multi-scale features from different layers. To conduct training in an unsupervised fashion, we design a unified learning framework capable of directly producing the clustering assignment results. Compared with existing unsupervised deep graph clustering methods, our method is more flexible and effective because it exploits the abundant and discriminative information embedded in the network to adaptively learn the embedding representation. Extensive quantitative and qualitative results on commonly used benchmark datasets validate that AGCN consistently outperforms state-of-the-art methods, and a series of ablation studies further verify the efficiency and effectiveness of our approach.
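To make the attention-driven fusion idea concrete, the following is a minimal PyTorch-style sketch of a heterogeneity-wise fusion step in the spirit of AGCN-H. The class, variable names, and the simple linear attention head are illustrative assumptions, not the thesis's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeterogeneityWiseFusion(nn.Module):
    """Attention-weighted fusion of an AE feature and a GCN feature from the
    same layer (illustrative sketch only, not the official AGCN code)."""

    def __init__(self, dim):
        super().__init__()
        # Maps the concatenated feature pair to two attention logits per node.
        self.attn = nn.Linear(2 * dim, 2)

    def forward(self, z_ae, z_gcn):
        # z_ae, z_gcn: (num_nodes, dim) features from the AE and GCN branches.
        logits = self.attn(torch.cat([z_ae, z_gcn], dim=1))   # (num_nodes, 2)
        weights = F.softmax(logits, dim=1)                    # per-node importance
        w_ae, w_gcn = weights[:, 0:1], weights[:, 1:2]
        # Adaptive combination instead of a fixed 50/50 average.
        return w_ae * z_ae + w_gcn * z_gcn

# Usage: fuse two 64-dimensional embeddings for 100 nodes.
fusion = HeterogeneityWiseFusion(dim=64)
z_fused = fusion(torch.randn(100, 64), torch.randn(100, 64))
```

The key design choice this sketch illustrates is that the fusion weights are predicted per node from the features themselves, rather than fixed by hand.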
Nevertheless, the above model suffers from a decision-making dilemma concerning the two learned probability distributions from the auto-encoder and the graph convolutional network, i.e., which one should be selected as the final clustering assignment. To the best of our knowledge, this is an unsolved issue common to previous unsupervised deep graph clustering approaches. To handle this challenge, we propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC). Specifically, we design a distribution-wise fusion module that leverages the two kinds of clustering assignments to adaptively learn the assignment representation, from which the final clustering results are obtained. To better exploit the readily available information in the clustering assignments, we develop a dual self-supervision solution consisting of a soft self-supervision strategy with a Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo-supervision loss to guide the overall network training. Quantitative and qualitative experiments and analyses on nine benchmark datasets demonstrate that our method consistently outperforms state-of-the-art approaches. In addition, we provide ablation studies and visualizations to validate the effectiveness and advantages of the DAGC network.
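As a reference point for the dual self-supervision, the soft strategy can be written as a KL divergence between a soft assignment distribution Q and a sharpened target distribution P, while the hard strategy applies a cross-entropy loss on pseudo labels taken from the most confident assignments. The DEC-style target distribution below is an assumption for illustration; the exact targets, confidence selection, and loss weighting used in DAGC may differ.

```latex
% Soft self-supervision: KL divergence to a sharpened target distribution P.
\mathcal{L}_{\mathrm{soft}}
  = \mathrm{KL}\!\left(P \,\|\, Q\right)
  = \sum_{i}\sum_{k} p_{ik} \log \frac{p_{ik}}{q_{ik}},
\qquad
p_{ik} = \frac{q_{ik}^{2} \big/ \sum_{i} q_{ik}}
              {\sum_{k'} \left( q_{ik'}^{2} \big/ \sum_{i} q_{ik'} \right)} .

% Hard self-supervision: cross-entropy on pseudo labels
% \hat{y}_i = \arg\max_{k} q_{ik} drawn from the most confident assignments.
\mathcal{L}_{\mathrm{hard}} = - \sum_{i} \log q_{i \hat{y}_i} .
```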
Moreover, existing GCN-based graph clustering networks rely heavily on a predefined graph and may fail if the initial graph does not accurately reflect the topology structure of the data in the embedding space. To address this problem, we propose a novel embedding-induced graph refinement clustering network (EGRC-Net) that adaptively uses the learned embedding to refine the initial graph and thereby achieve better clustering performance. Specifically, we first utilize vanilla auto-encoder and graph convolutional network modules to adaptively integrate the node attribute and topology structure information and learn the latent feature representation. Then, we exploit the geometric structure of the embedding space to construct an adjacency graph and subsequently develop a graph fusion architecture that dynamically fuses that graph with the initial one. Finally, we minimize a Jeffreys divergence loss between multiple derived distributions to train the network in an unsupervised fashion. Extensive experiments on seven commonly used benchmark datasets demonstrate that the proposed method consistently outperforms several state-of-the-art approaches.
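A minimal sketch of the graph-refinement idea follows: a k-nearest-neighbour graph is built from the learned embedding, blended with the initial adjacency matrix, and the Jeffreys divergence is computed as the symmetrised KL divergence between two assignment distributions. The function names and the fixed blending weight are illustrative assumptions rather than the EGRC-Net implementation, which fuses the graphs adaptively.

```python
import torch
import torch.nn.functional as F

def knn_graph(z, k=10):
    """Build a symmetric kNN adjacency matrix from node embeddings z: (n, d)."""
    zn = F.normalize(z, dim=1)
    sim = zn @ zn.t()                                  # cosine similarity
    topk = sim.topk(k + 1, dim=1).indices              # +1 so the self-match can be dropped
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    adj.fill_diagonal_(0.0)
    return ((adj + adj.t()) > 0).float()               # symmetrise

def fuse_graphs(a_init, a_emb, alpha=0.5):
    """Blend the predefined graph with the embedding-induced one
    (a fixed weight here; a learnable fusion would replace alpha)."""
    return alpha * a_init + (1.0 - alpha) * a_emb

def jeffreys_divergence(p, q, eps=1e-12):
    """Symmetrised KL divergence between two assignment distributions
    p, q: (n, num_clusters) with rows summing to one."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return ((p - q) * (p.log() - q.log())).sum(dim=1).mean()
```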
Finally, the significance of these works is highlighted by the following observation: adaptive representation learning in unsupervised deep graph clustering is a critical component of artificial general intelligence, since it enables machines to learn complex patterns and relationships from data without human annotations. By leveraging unsupervised adaptive representation learning techniques, an artificial general intelligence system can develop internal representations of data and adapt to new environments, tasks, and situations. This capability is essential for making sense of complex and unstructured data. Last but not least, unsupervised adaptive representation learning is a key step towards machines that can learn and reason like humans, opening new possibilities for AI in a wide range of domains, from healthcare to finance to entertainment. In the future, we will continue to contribute to the artificial general intelligence community by investigating large-scale datasets, advanced representation learning, and efficient information propagation.