AMS-NET : ADAPTIVE MULTISCALE SPARSE NEURAL NETWORK WITH INTERPRETABLE BASIS EXPANSION FOR MULTIPHASE FLOW PROBLEMS

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

1 Scopus Citation

Author(s)

Detail(s)

Original language: English
Pages (from-to): 618-640
Journal / Publication: Multiscale Modeling and Simulation
Volume: 20
Issue number: 2
Online published: 23 Jun 2022
Publication status: Published - 2022
Externally published: Yes

Abstract

In this work, we propose an adaptive sparse learning algorithm that can be applied to learn physical processes and obtain a sparse representation of the solution given a large snapshot space. We assume that there is a rich class of precomputed basis functions that can be used to approximate the quantity of interest. For instance, in the simulation of a multiscale flow system, one can adopt mixed multiscale methods to compute velocity bases from local problems and apply the proper orthogonal decomposition method to construct bases for the saturation equation. We then design a neural network architecture to learn the coefficients of the solutions in the spaces spanned by these basis functions. The information of the basis functions is incorporated in the loss function, which minimizes the differences between the downscaled reduced-order solutions and the reference solutions at multiple time steps. The network contains multiple submodules, so the solutions at different time steps can be learned simultaneously. We propose strategies within the learning framework to identify important degrees of freedom. To find a sparse solution representation, a soft-thresholding operator is applied to enforce sparsity of the output coefficient vectors of the neural network. To avoid oversimplification and to enrich the approximation space, some degrees of freedom can be added back to the system through a greedy algorithm. In both scenarios, that is, removing and adding degrees of freedom, the corresponding network connections are pruned or reactivated, guided by the magnitude of the solution coefficients obtained from the network outputs. The proposed adaptive learning process is applied to several toy examples to demonstrate that it achieves good basis selection and accurate approximation. Further numerical tests on two-phase multiscale flow problems show the capability and interpretability of the proposed method in complicated applications.
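The sparsification step in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the function names, the toy coefficient vector, and the random stand-in basis are all hypothetical). Soft-thresholding shrinks every coefficient toward zero and sets small entries exactly to zero; the surviving nonzero entries define the active degrees of freedom, whose magnitudes guide pruning and reactivation of network connections:

```python
import numpy as np

def soft_threshold(c, tau):
    """Soft-thresholding operator: shrink each coefficient toward zero,
    setting entries with magnitude below tau exactly to zero."""
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

def active_set(c, tol=1e-12):
    """Degrees of freedom kept after thresholding (nonzero coefficients).
    In the adaptive loop, pruned entries may later be reactivated,
    i.e., added back greedily if the approximation becomes too coarse."""
    return np.flatnonzero(np.abs(c) > tol)

# Toy example: coefficients of a solution in a precomputed basis.
coeffs = np.array([0.8, -0.03, 0.0, 1.5, 0.06])
sparse_coeffs = soft_threshold(coeffs, tau=0.1)
print(sparse_coeffs)               # [0.7, 0., 0., 1.4, 0.]
print(active_set(sparse_coeffs))   # [0, 3]

# Downscaling: the reduced-order solution is the basis matrix (one basis
# function per column) applied to the sparse coefficient vector; here
# `basis` is a random stand-in for the precomputed multiscale/POD bases.
basis = np.random.default_rng(0).normal(size=(50, 5))
u_reduced = basis @ sparse_coeffs
```

The thresholded coefficient magnitudes give an interpretable ranking of the basis functions, which is what makes the pruning/reactivation decisions in the adaptive loop transparent.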

Research Area(s)

  • adaptive method, deep neural network, interpretable machine learning, model reduction, multiscale flow dynamics, sparse learning