TY - JOUR
T1 - BelNet: basis enhanced learning, a mesh-free neural operator
AU - Zhang, Zecheng
AU - Leung, Wing Tat
AU - Schaeffer, Hayden
PY - 2023/8
Y1 - 2023/8
N2 - Operator learning trains a neural network to map functions to functions. An ideal operator learning framework should be mesh-free in the sense that the training does not require a particular choice of discretization for the input functions, allows for the input and output functions to be on different domains, and is able to have different grids between samples. We propose a mesh-free neural operator for solving parametric partial differential equations. The basis enhanced learning network (BelNet) projects the input function into a latent space and reconstructs the output functions. In particular, we construct part of the network to learn the 'basis' functions in the training process. This generalizes the networks proposed in Chen & Chen (Chen and Chen 1995 IEEE Trans. Neural Netw. 49, 911-917. (doi:10.1109/72.392253) and 6, 904-910. (doi:10.1109/IJCNN.1993.716815)) to account for differences in input and output meshes. Through several challenging high-contrast and multiscale problems, we show that our approach outperforms other operator learning methods for these tasks and allows for more freedom in the sampling and/or discretization process.
KW - discretization invariant
KW - multiscale problem
KW - operator learning
KW - partial differential equation
UR - http://www.scopus.com/inward/record.url?scp=85171663417&partnerID=8YFLogxK
DO - 10.1098/rspa.2023.0043
M3 - RGC 21 - Publication in refereed journal
SN - 1364-5021
VL - 479
JO - Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
JF - Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
IS - 2276
M1 - 20230043
ER -