TY - JOUR
T1 - AMS-Net
T2 - Adaptive Multiscale Sparse Neural Network with Interpretable Basis Expansion for Multiphase Flow Problems
AU - Wang, Yating
AU - Leung, Wing Tat
AU - Lin, Guang
PY - 2022
Y1 - 2022
N2 - In this work, we propose an adaptive sparse learning algorithm that learns the physical processes and obtains a sparse representation of the solution given a large snapshot space. Assume that there is a rich class of precomputed basis functions that can be used to approximate the quantity of interest. For instance, in the simulation of a multiscale flow system, one can adopt mixed multiscale methods to compute velocity bases from local problems and apply the proper orthogonal decomposition method to construct bases for the saturation equation. We then design a neural network architecture to learn the coefficients of the solutions in the spaces spanned by these basis functions. The information of the basis functions is incorporated in the loss function, which minimizes the differences between the downscaled reduced-order solutions and the reference solutions at multiple time steps. The network contains multiple submodules, and the solutions at different time steps can be learned simultaneously. We propose strategies within the learning framework to identify important degrees of freedom. To find a sparse solution representation, a soft-thresholding operator is applied to enforce the sparsity of the output coefficient vectors of the neural network. To avoid oversimplification and to enrich the approximation space, some degrees of freedom can be added back to the system through a greedy algorithm. In both scenarios, that is, removing and adding degrees of freedom, the corresponding network connections are pruned or reactivated, guided by the magnitude of the solution coefficients obtained from the network outputs. The proposed adaptive learning process is applied to toy examples to demonstrate that it achieves good basis selection and accurate approximation. Further numerical tests on two-phase multiscale flow problems show the capability and interpretability of the proposed method in complicated applications.
KW - adaptive method
KW - deep neural network
KW - interpretable machine learning
KW - model reduction
KW - multiscale flow dynamics
KW - sparse learning
UR - http://www.scopus.com/inward/record.url?scp=85135414789&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85135414789&origin=recordpage
U2 - 10.1137/21M1405289
DO - 10.1137/21M1405289
M3 - RGC 21 - Publication in refereed journal
SN - 1540-3459
VL - 20
SP - 618
EP - 640
JO - Multiscale Modeling and Simulation
JF - Multiscale Modeling and Simulation
IS - 2
ER -