Abstract
Kernels effectively represent nonlocal dependencies and are extensively employed in formulating operators between function spaces. Learning kernels in operators from data is therefore an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem is often severely ill-posed, with a data-dependent normal operator. Traditional Bayesian methods address the ill-posedness with a non-degenerate prior, which may result in an unstable posterior mean in the small-noise regime, especially when the data induces a perturbation in the null space of the normal operator. We propose a new data-adaptive Reproducing Kernel Hilbert Space (RKHS) prior, which ensures the stability of the posterior mean in the small-noise regime. We analyze this adaptive prior and showcase its efficacy through applications to Toeplitz matrices and integral operators. Numerical experiments reveal that fixed non-degenerate priors can produce divergent posterior means under errors from discretization, model inaccuracies, partial observations, or erroneous noise assumptions. In contrast, our data-adaptive RKHS prior consistently yields convergent posterior means. ©2024 Neil K. Chada, Quanjun Lang, Fei Lu and Xiong Wang.
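To make the instability concrete, here is a minimal numpy sketch (not the paper's implementation) of the phenomenon the abstract describes, under simplifying assumptions: the normal operator is replaced by a singular positive semi-definite matrix `G`, a perturbation `delta` in the null space of `G` mimics discretization or model error, and the data-adaptive RKHS prior is imitated by penalizing an H_G-type norm in the eigenbasis of `G`. All names (`A`, `G`, `f`, `delta`, `lam`) are illustrative choices, not from the paper.

```python
# Finite-dimensional surrogate: fixed non-degenerate prior vs. an
# RKHS-type prior built from the (data-dependent) normal operator.
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient normal operator G = A^T A (a stand-in for the paper's L_G).
A = rng.standard_normal((20, 10))
A[:, -2:] = 0.0                       # force a 2-dimensional null space
G = A.T @ A

# Eigendecomposition separates range(G) from null(G).
evals, V = np.linalg.eigh(G)
in_range = evals > 1e-10

# True solution lives in range(G); the normal-equation right-hand side f
# gains a small null-space perturbation, mimicking model/discretization error.
x_true = V[:, in_range] @ rng.standard_normal(in_range.sum())
delta = 1e-3 * V[:, ~in_range] @ rng.standard_normal((~in_range).sum())
f = G @ x_true + delta

for lam in [1e-2, 1e-4, 1e-6, 1e-8]:  # lam ~ noise level: small-noise regime
    # Fixed non-degenerate (identity-covariance) prior:
    # (G + lam I) x = f amplifies the null-space part of f by 1/lam.
    x_fixed = np.linalg.solve(G + lam * np.eye(10), f)

    # RKHS-type prior: penalizing the H_G norm confines the estimator to
    # range(G); in the eigenbasis the filter is
    #   x_i = evals_i * f_i / (evals_i^2 + lam) on range(G), 0 on null(G).
    fi = V.T @ f
    xi = np.where(in_range, evals * fi / (evals**2 + lam), 0.0)
    x_rkhs = V @ xi

    print(f"lam={lam:.0e}  "
          f"|x_fixed - x_true|={np.linalg.norm(x_fixed - x_true):.2e}  "
          f"|x_rkhs - x_true|={np.linalg.norm(x_rkhs - x_true):.2e}")
```

As `lam` decreases, the fixed-prior error grows like |delta|/lam while the RKHS-filtered error shrinks, mirroring the divergent versus convergent posterior means reported in the paper's experiments.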
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 317 |
| Number of pages | 37 |
| Journal | Journal of Machine Learning Research |
| Volume | 25 |
| Publication status | Online published - Oct 2024 |
Funding
The work of FL is funded by the Johns Hopkins University Catalyst Award, AFOSR grant FA9550-20-1-0288, and NSF grant DMS-2238486. XW is supported by the Johns Hopkins University research fund.
Research Keywords
- data-adaptive prior
- kernels in operators
- linear Bayesian inverse problem
- RKHS
- Tikhonov regularization