
Fast Haar Transforms for Graph Neural Networks

Ming Li, Zheng Ma, Yu Guang Wang*, Xiaosheng Zhuang

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Graph Neural Networks (GNNs) have become a topic of intense research recently due to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define graph convolution via the orthonormal eigenbasis of the graph Laplacian, they suffer from high computational cost when the graph is large. This paper introduces a Haar basis, a sparse and localized orthonormal system built from a coarse-grained chain on the graph. Graph convolution under the Haar basis, called Haar convolution, can be defined accordingly for GNNs. The sparsity and locality of the Haar basis allow Fast Haar Transforms (FHTs) on the graph, by which one achieves a fast evaluation of the Haar convolution between graph data and filters. We conduct experiments on GNNs equipped with Haar convolution, which demonstrate state-of-the-art results on graph-based regression and node classification tasks.
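The abstract describes convolution under an orthonormal basis: transform the signal and filter into the basis, multiply pointwise, and transform back. The following minimal sketch illustrates that pattern with a dense Laplacian eigenbasis on a toy 4-node graph; the graph, signal, and filter values are made up for illustration and are not from the paper. The paper's contribution is to replace this dense eigenbasis with a sparse, localized Haar basis so that the two transforms become fast (FHTs).

```python
import numpy as np

# Toy path graph on 4 nodes (illustrative only, not a dataset from the paper).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

# Orthonormal basis U: here, Laplacian eigenvectors (dense, hence costly
# for large graphs). A Haar basis would play the same role but be sparse.
_, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 3.0, 4.0])    # graph signal on the 4 nodes
g = np.array([1.0, 0.5, 0.25, 0.1])   # filter coefficients in the basis

# Convolution under the basis: forward transform, pointwise product,
# inverse transform (U is orthonormal, so the inverse is U itself).
conv = U @ (g * (U.T @ f))
print(conv)
```

Both the forward transform `U.T @ f` and the inverse `U @ (...)` cost O(n²) with a dense basis; with a sparse Haar basis these become fast transforms, which is the speedup the paper targets.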
Original language: English
Pages (from-to): 188-198
Journal: Neural Networks
Volume: 128
Online published: 4 May 2020
DOIs
Publication status: Published - Aug 2020

Research Keywords

  • Graph Neural Networks
  • Haar basis
  • Graph convolution
  • Fast Haar Transforms
  • Geometric deep learning
  • Graph Laplacian

