Multi-agent Reinforcement Learning Aided Sampling Algorithms for a Class of Multiscale Inverse Problems

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review



Detail(s)

Original language: English
Article number: 55
Journal / Publication: Journal of Scientific Computing
Volume: 96
Issue number: 2
Online published: 3 Jul 2023
Publication status: Published - Aug 2023

Abstract

In this work, we formulate a class of multiscale inverse problems within the framework of reinforcement learning (RL) and solve it by a sampling method. We propose a multi-agent actor-critic RL algorithm to accelerate multilevel Markov chain Monte Carlo (MCMC) sampling once the problem is formulated as an RL process. The policies of the agents are used to generate proposals in the MCMC steps, while the centralized critic estimates the expected reward. Traditional MCMC sampling faces several difficulties on inverse problems involving multiscale features. First, evaluating the posterior distribution requires running the forward solver, which is time-consuming for problems with heterogeneities; this motivates the use of multilevel algorithms. Second, a proper transition function is hard to find. To overcome these issues, we learn an RL policy to serve as the proposal generator. We verify the proposed algorithm on several benchmark multiscale inverse problems. Our experiments show that the proposed method improves the sampling process and speeds up the residual convergence.

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023
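The loop sketched in the abstract, agent policies proposing MCMC moves while a centralized critic tracks the expected reward, can be illustrated in a few dozen lines. The sketch below is a minimal illustration under stated assumptions, not the paper's method: the toy log_posterior, the PolicyAgent and CentralCritic classes, the acceptance-based reward, and the round-robin agent schedule are all hypothetical stand-ins, and the multilevel forward-solver hierarchy central to the paper is omitted entirely.

    # Minimal sketch of an RL-aided Metropolis-Hastings sampler.
    # All names and design choices here are illustrative assumptions,
    # NOT taken from the paper (which uses a multi-agent actor-critic
    # scheme on top of a multilevel forward solver).
    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(x):
        # Toy target: a standard 2-D Gaussian stands in for the
        # (expensive) posterior defined through a multiscale solver.
        return -0.5 * np.sum(x**2)

    class PolicyAgent:
        """One 'agent': a Gaussian random-walk proposal with a learnable
        log step size, updated by a REINFORCE-style gradient."""
        def __init__(self, dim, lr=0.05):
            self.log_sigma = 0.0
            self.dim = dim
            self.lr = lr

        def propose(self, x):
            eps = rng.standard_normal(self.dim)
            return x + np.exp(self.log_sigma) * eps, eps

        def update(self, eps, advantage):
            # For q(y|x) = N(x, sigma^2 I), the score w.r.t. log_sigma
            # is sum_i (eps_i^2 - 1).
            grad = np.sum(eps**2 - 1.0)
            self.log_sigma += self.lr * advantage * grad

    class CentralCritic:
        """Centralized critic: a running baseline of the expected reward."""
        def __init__(self, lr=0.02):
            self.value = 0.0
            self.lr = lr

        def advantage(self, reward):
            adv = reward - self.value
            self.value += self.lr * adv  # TD(0)-style baseline update
            return adv

    def rl_mcmc(n_steps=5000, n_agents=3, dim=2):
        agents = [PolicyAgent(dim) for _ in range(n_agents)]
        critic = CentralCritic()
        x, lp = np.zeros(dim), log_posterior(np.zeros(dim))
        samples = []
        for t in range(n_steps):
            agent = agents[t % n_agents]  # round-robin over agents
            y, eps = agent.propose(x)
            lp_y = log_posterior(y)
            # Symmetric proposal, so the MH ratio is just pi(y)/pi(x).
            accept = np.log(rng.uniform()) < lp_y - lp
            reward = 1.0 if accept else 0.0  # reward = acceptance
            agent.update(eps, critic.advantage(reward))
            if accept:
                x, lp = y, lp_y
            samples.append(x.copy())
        return np.array(samples)

    if __name__ == "__main__":
        chain = rl_mcmc()
        print("posterior mean estimate:", chain[2500:].mean(axis=0))

One caveat on the design: adapting the proposal online makes this an adaptive MCMC scheme, so a careful implementation would diminish or freeze the policy updates to preserve the chain's stationary distribution; the sketch leaves adaptation on throughout for brevity.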

Research Area(s)

  • Multiscale, Inverse problem, Reinforcement learning, MCMC