A discretization-invariant extension and analysis of some deep operator networks

Zecheng Zhang, Wing Tat Leung*, Hayden Schaeffer

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

2 Citations (Scopus)

Abstract

We present a generalized version of the discretization-invariant neural operator in Zhang et al. (2022) and prove that the network is a universal approximator in the operator sense. Moreover, by incorporating additional terms in the architecture, we establish a connection between this discretization-invariant neural operator network and those discussed in Chen and Chen (1995) and Lu et al. (2021). The discretization-invariance property of the operator network implies that different input functions can be sampled at different sensor locations within the same training and testing phases. Additionally, since the network learns a "basis" for the input and output function spaces, our approach enables the evaluation of input functions on different discretizations. To evaluate the performance of the proposed discretization-invariant neural operator, we focus on challenging examples from multiscale partial differential equations. Our experimental results indicate that the method achieves lower prediction errors than previous networks and benefits from its discretization-invariant property. © 2024 Elsevier B.V.
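To make the discretization-invariance idea concrete, the sketch below shows one common way such a property can be realized: each (sensor location, sensor value) pair of the input function is encoded point-wise and the encodings are averaged, so the number and placement of sensors may differ from sample to sample, while a trunk-style network evaluates learned output basis functions at arbitrary query points. This is an illustrative assumption, not the authors' architecture; the class name, layer sizes, and the mean-pooling encoder are hypothetical.

```python
# Minimal sketch (assumed design, not the paper's exact network) of a
# discretization-invariant operator network in PyTorch.
import torch
import torch.nn as nn


class DiscretizationInvariantOperator(nn.Module):
    def __init__(self, hidden: int = 64, n_basis: int = 32):
        super().__init__()
        # Point-wise encoder: maps one (sensor location, sensor value) pair to
        # a feature vector; averaging over sensors yields a summary of the
        # input function that does not depend on the sensor grid.
        self.encoder = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, n_basis)
        )
        # Trunk-style network: evaluates learned output "basis" functions
        # at arbitrary query points y.
        self.trunk = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, n_basis)
        )

    def forward(self, x_sensors, u_values, y_query):
        # x_sensors, u_values: (n_sensors, 1) -- any number of sensors works.
        # y_query: (n_query, 1) -- points where the output function is queried.
        pairs = torch.cat([x_sensors, u_values], dim=-1)  # (n_sensors, 2)
        coeffs = self.encoder(pairs).mean(dim=0)          # (n_basis,)
        basis = self.trunk(y_query)                       # (n_query, n_basis)
        return basis @ coeffs                             # (n_query,)


# Usage: the same model accepts input functions sampled on different grids.
model = DiscretizationInvariantOperator()
for n_sensors in (20, 57):                       # two different discretizations
    x = torch.rand(n_sensors, 1)                 # random sensor locations
    u = torch.sin(torch.pi * x)                  # sampled input function u(x)
    y = torch.linspace(0, 1, 11).unsqueeze(-1)   # query points for G(u)(y)
    print(model(x, u, y).shape)                  # torch.Size([11])
```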
Original language: English
Article number: 116226
Number of pages: 13
Journal: Journal of Computational and Applied Mathematics
Volume: 456
Online published: 20 Aug 2024
Publication status: Published - 1 Mar 2025

Research Keywords

  • Operator learning
  • Parametric partial differential equation
  • Multiscale problem
  • Deep neural network
  • Scientific machine learning
