TY - JOUR
T1 - A discretization-invariant extension and analysis of some deep operator networks
AU - Zhang, Zecheng
AU - Leung, Wing Tat
AU - Schaeffer, Hayden
PY - 2025/3/1
Y1 - 2025/3/1
N2 - We present a generalized version of the discretization-invariant neural operator in Zhang et al. (2022) and prove that the network is a universal approximator in the operator sense. Moreover, by incorporating additional terms into the architecture, we establish a connection between this discretization-invariant neural operator network and those discussed in Chen and Chen (1995) and Lu et al. (2021). The discretization-invariance property of the operator network means that different input functions can be sampled at varying sensor locations within the same training and testing phases. Additionally, since the network learns a "basis" for the input and output function spaces, our approach allows input functions to be evaluated on different discretizations. To assess the performance of the proposed discretization-invariant neural operator, we focus on challenging examples from multiscale partial differential equations. Our experimental results indicate that the method achieves lower prediction errors than previous networks and benefits from its discretization-invariance property.
KW - Operator learning
KW - Parametric partial differential equation
KW - Multiscale problem
KW - Deep neural network
KW - Scientific machine learning
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001301093300001
UR - http://www.scopus.com/inward/record.url?scp=85201619350&partnerID=8YFLogxK
U2 - 10.1016/j.cam.2024.116226
DO - 10.1016/j.cam.2024.116226
M3 - RGC 21 - Publication in refereed journal
SN - 0377-0427
VL - 456
JO - Journal of Computational and Applied Mathematics
JF - Journal of Computational and Applied Mathematics
M1 - 116226
ER -