Fast Haar Transforms for Graph Neural Networks

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

53 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 188-198
Journal / Publication: Neural Networks
Volume: 128
Online published: 4 May 2020
Publication status: Published - Aug 2020

Abstract

Graph Neural Networks (GNNs) have recently become a topic of intense research owing to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define the graph convolution through the orthonormal eigenbasis of the graph Laplacian, they suffer from high computational cost when the graph is large. This paper introduces a Haar basis, a sparse and localized orthonormal system built from a coarse-grained chain on the graph. The graph convolution under the Haar basis, called Haar convolution, can be defined accordingly for GNNs. The sparsity and locality of the Haar basis allow Fast Haar Transforms (FHTs) on the graph, by which one achieves fast evaluation of the Haar convolution between graph data and filters. Experiments on GNNs equipped with Haar convolution demonstrate state-of-the-art results on graph-based regression and node classification tasks.
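The abstract describes a spectral-style convolution carried out in an orthonormal (Haar) basis. The snippet below is a minimal NumPy sketch of that idea, not the paper's implementation: it assumes a dense orthonormal basis matrix Phi is already available, transforms node features into that basis, reweights them with a filter, and transforms back. The function and variable names and the random stand-in basis are illustrative assumptions; in particular, this toy version does not exploit the sparsity of a true Haar basis, which is what enables the fast transforms reported in the paper.

```python
import numpy as np

def haar_convolution(x, theta, Phi):
    """Spectral-style convolution under an orthonormal basis (illustrative sketch).

    x     : (N, d) node feature matrix on a graph with N nodes.
    theta : (N,) filter coefficients in the transform domain.
    Phi   : (N, N) orthonormal basis matrix, columns are basis vectors.

    Computes Phi @ diag(theta) @ Phi.T @ x: features are mapped to the
    transform domain, reweighted by the filter, and mapped back. A sparse,
    hierarchical basis would allow the two transforms to be evaluated fast;
    this dense toy version costs O(N^2 d).
    """
    x_hat = Phi.T @ x               # forward transform (analysis)
    x_hat = theta[:, None] * x_hat  # filtering in the transform domain
    return Phi @ x_hat              # inverse transform (synthesis)

# Tiny usage example with a random orthonormal basis as a stand-in for a
# Haar basis built from a coarse-grained chain of the graph (assumption).
rng = np.random.default_rng(0)
N, d = 8, 3
Phi, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthonormal columns
x = rng.standard_normal((N, d))
theta = rng.standard_normal(N)
y = haar_convolution(x, theta, Phi)
print(y.shape)  # (8, 3)
```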

Research Area(s)

  • Graph Neural Networks, Haar basis, Graph convolution, Fast Haar Transforms, Geometric deep learning, Graph Laplacian

Citation Format(s)

Fast Haar Transforms for Graph Neural Networks. / Li, Ming; Ma, Zheng; Wang, Yu Guang et al.
In: Neural Networks, Vol. 128, 08.2020, p. 188-198.
