A Data-Adaptive RKHS Prior for Bayesian Learning of Kernels in Operators

Neil K. Chada, Quanjun Lang, Fei Lu, Xiong Wang

Research output: Journal Publications and Reviews › Comment/debate › peer-review

Abstract

Kernels effectively represent nonlocal dependencies and are extensively employed in formulating operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem is often severely ill-posed, with a data-dependent normal operator. Traditional Bayesian methods address the ill-posedness with a non-degenerate prior, which may result in an unstable posterior mean in the small noise regime, especially when the data induce a perturbation in the null space of the normal operator. We propose a new data-adaptive Reproducing Kernel Hilbert Space (RKHS) prior, which ensures the stability of the posterior mean in the small noise regime. We analyze this adaptive prior and showcase its efficacy through applications to Toeplitz matrices and integral operators. Numerical experiments reveal that fixed non-degenerate priors can produce divergent posterior means under errors from discretization, model inaccuracies, partial observations, or erroneous noise assumptions. In contrast, our data-adaptive RKHS prior consistently yields convergent posterior means. ©2024 Neil K. Chada, Quanjun Lang, Fei Lu and Xiong Wang.
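To make the stability claim concrete, the following is a minimal, self-contained numerical sketch (not the authors' code) in the spirit of the paper's Toeplitz-matrix example. It assumes the simplest linear-Gaussian form of the problem, y = A x + noise, with a rank-deficient Toeplitz forward map A and normal operator G = A'A, and it uses a prior covariance proportional to G as a rough finite-dimensional stand-in for the data-adaptive RKHS prior; the dimensions n and r, the kernel width, and the perturbation size are illustrative choices, not values from the paper.

    import numpy as np
    from scipy.linalg import toeplitz

    # Linear model y = A x + noise; A is a smooth (hence ill-conditioned)
    # Toeplitz matrix, truncated to exact rank r so that the normal
    # operator G = A'A has a genuine null space.
    rng = np.random.default_rng(0)
    n, r = 50, 30                                 # illustrative sizes
    A0 = toeplitz(np.exp(-0.5 * (np.arange(n) / 3.0) ** 2))
    U, s, Vt = np.linalg.svd(A0)
    s[r:] = 0.0                                   # create an exact null space
    A = (U * s) @ Vt
    G = A.T @ A                                   # data-dependent normal operator
    w_null = Vt[-1]                               # a direction in null(G)

    x_true = np.sin(2 * np.pi * np.arange(n) / n)
    I = np.eye(n)
    for sigma in [1e-1, 1e-2, 1e-3, 1e-4]:
        y = A @ x_true + sigma * rng.standard_normal(n)
        # Discretization/model error typically contaminates the computed
        # regression vector b = A'y, including its null(G) component:
        b = A.T @ y + 1e-6 * w_null
        m_fixed = np.linalg.solve(G + sigma**2 * I, b)         # prior N(0, I)
        m_rkhs = G @ np.linalg.solve(G @ G + sigma**2 * I, b)  # prior N(0, G)
        print(f"sigma={sigma:.0e}  |m_fixed|={np.linalg.norm(m_fixed):.2e}"
              f"  |m_rkhs|={np.linalg.norm(m_rkhs):.2e}")

As the assumed noise level sigma shrinks, the null-space component of the perturbed regression vector is amplified like 1/sigma^2 under the fixed prior N(0, I), so the posterior mean diverges; the G-based prior confines the posterior mean to range(G), matching the convergent behavior described in the abstract.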
Original language: English
Article number: 317
Number of pages: 37
Journal: Journal of Machine Learning Research
Volume: 25
Publication status: Online published - Oct 2024

Funding

The work of FL is funded by the Johns Hopkins University Catalyst Award, grant FA9550-20-1-0288, and NSF grant DMS-2238486. XW is supported by the Johns Hopkins University research fund.

Research Keywords

  • Data-adaptive prior
  • Kernels in operators
  • Linear Bayesian inverse problem
  • RKHS
  • Tikhonov regularization
