A multi-stage deep learning based algorithm for multiscale model reduction

Eric Chung*, Wing Tat Leung, Sai-Mang Pun, Zecheng Zhang

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

13 Citations (Scopus)

Abstract

In this work, we propose a multi-stage training strategy for developing deep learning algorithms applied to problems with multiscale features. Each stage of the proposed strategy shares an (almost) identical network structure and predicts the same reduced-order model of the multiscale problem. The output of the previous stage is combined with an intermediate layer of the current stage. We show numerically that using different reduced-order models as inputs to each stage improves training, and we propose several ways of adding different information into the system. These methods include mathematical multiscale model reductions and network-based approaches; we found that the mathematical approach provides a systematic way of decoupling information and gives the best results. Finally, we verify our training methodology on a time-dependent nonlinear problem and a steady-state model.
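The abstract's multi-stage idea can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' implementation: the layer sizes, the `Stage` class, and the projection matrix `Wp` that injects the previous stage's prediction into the intermediate layer are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Stage:
    """One training stage: a small two-layer network (hypothetical sizes).

    All stages share the same structure and predict the same
    reduced-order coefficients; the previous stage's prediction,
    if any, is combined with the intermediate (hidden) layer.
    """
    def __init__(self, dim_in, dim_hidden, dim_out):
        self.W1 = rng.standard_normal((dim_in, dim_hidden)) * 0.1
        self.W2 = rng.standard_normal((dim_hidden, dim_out)) * 0.1
        # assumed mechanism: project the previous stage's output
        # into the hidden layer and add it
        self.Wp = rng.standard_normal((dim_out, dim_hidden)) * 0.1

    def forward(self, x, prev_pred=None):
        h = relu(x @ self.W1)
        if prev_pred is not None:
            h = h + prev_pred @ self.Wp  # combine with intermediate layer
        return h @ self.W2

# three stages with (almost) identical structure
stages = [Stage(8, 16, 4) for _ in range(3)]

x = rng.standard_normal((5, 8))   # batch of 5 multiscale inputs
pred = None
for stage in stages:
    pred = stage.forward(x, pred)  # feed previous output forward

print(pred.shape)  # (5, 4): reduced-order coefficients per sample
```

In the paper's setting, each stage would be trained against the same reduced-order target, with later stages refining the prediction of earlier ones; the sketch only shows the forward data flow between stages.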
Original language: English
Article number: 113506
Journal: Journal of Computational and Applied Mathematics
Volume: 394
Online published: 27 Feb 2021
DOIs
Publication status: Published - 1 Oct 2021
Externally published: Yes

Research Keywords

  • Deep learning
  • Multiscale model reduction
