Abstract
In this work, we formulate a class of multiscale inverse problems within the framework of reinforcement learning (RL) and solve them by a sampling method. We propose a multi-agent actor-critic RL algorithm to accelerate multi-level Markov chain Monte Carlo (MCMC) sampling once the problem is formulated as an RL process. The policies of the agents generate the proposals in the MCMC steps, while a centralized critic estimates the expected reward. Traditional MCMC sampling faces several difficulties for inverse problems involving features at multiple scales. First, computing the posterior distribution requires evaluating the forward solver, which is time-consuming for problems with heterogeneities; this motivates the use of multi-level algorithms. Second, it is hard to find a proper transition function. To overcome these issues, we learn an RL policy as the proposal generator. We verify the proposed algorithm on several benchmark multiscale inverse problems. Our experiments show that the proposed method improves the sampling process and speeds up the residual convergence. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023
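The abstract's central mechanism is a Metropolis–Hastings step whose proposal is driven by a learned policy rather than a symmetric random walk. The following is a minimal sketch of that idea, not the authors' algorithm: the "policy" is a fixed deterministic drift toward the mode, the target is a toy one-dimensional posterior standing in for the expensive forward solver, and all names (`log_target`, `policy_shift`, `mh_step`) are hypothetical. The key detail it illustrates is the asymmetric-proposal correction needed when the proposal depends on the current state.

```python
import math
import random

def log_target(x):
    # Toy log-posterior: a standard normal. In the paper's setting this
    # would require an expensive multiscale forward solve.
    return -0.5 * x * x

def policy_shift(x):
    # Hypothetical stand-in for the learned policy: a drift that pulls
    # proposals toward the mode (here, 0.0).
    return 0.5 * (0.0 - x)

def mh_step(x, sigma=1.0):
    # Propose from a Gaussian centered at the policy-shifted state.
    mean_fwd = x + policy_shift(x)
    y = random.gauss(mean_fwd, sigma)
    # Because the proposal mean depends on the state, the proposal is
    # asymmetric; include the correction log q(x|y) - log q(y|x).
    mean_rev = y + policy_shift(y)
    log_q = (-(x - mean_rev) ** 2 + (y - mean_fwd) ** 2) / (2.0 * sigma ** 2)
    log_alpha = log_target(y) - log_target(x) + log_q
    return y if math.log(random.random()) < log_alpha else x

random.seed(0)
x = 3.0
samples = []
for _ in range(20000):
    x = mh_step(x)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With the correction term, the chain still targets the posterior exactly; the policy only reshapes the proposal to reduce the number of rejected (and hence wasted) forward evaluations. Dropping `log_q` would bias the stationary distribution.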
| Original language | English |
|---|---|
| Article number | 55 |
| Journal | Journal of Scientific Computing |
| Volume | 96 |
| Issue number | 2 |
| Online published | 3 Jul 2023 |
| DOIs | |
| Publication status | Published - Aug 2023 |
Funding
The research of Eric Chung is partially supported by the Hong Kong RGC General Research Fund (Project numbers 14304021 and 14302620) and the CUHK Faculty of Science Direct Grant 2020-21. The research of Sai-Mang Pun is partially supported by the National Science Foundation (DMS-2208498).
Research Keywords
- Multiscale
- Inverse problem
- Reinforcement learning
- MCMC