Accelerating Monte Carlo Bayesian Prediction via Approximating Predictive Uncertainty Over the Simplex
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 1492-1506 |
Journal / Publication | IEEE Transactions on Neural Networks and Learning Systems |
Volume | 33 |
Issue number | 4 |
Online published | 23 Dec 2020 |
Publication status | Published - Apr 2022 |
Link(s)
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85098798288&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(35db507c-5118-43bc-9e87-2efbcd57fb8a).html |
Abstract
Estimating the predictive uncertainty of a Bayesian learning model is critical in various decision-making problems, e.g., reinforcement learning, adversarial attack detection, and self-driving cars. As the model posterior is almost always intractable, most efforts have focused on finding an accurate approximation to the true posterior. Even when a decent estimate of the model posterior is obtained, another approximation is required to compute the predictive distribution over the desired output. A common and accurate solution is Monte Carlo (MC) integration. However, it needs to maintain a large number of samples, evaluate the model repeatedly, and average multiple model outputs. In many real-world cases, this is computationally prohibitive. In this work, assuming that the exact posterior or a decent approximation is available, we propose a generic framework that approximates the output probability distribution induced by the model posterior with a parameterized model, in an amortized fashion. The aim is to approximate the predictive uncertainty of a specific Bayesian model while alleviating the heavy workload of MC integration at test time. The proposed method is universally applicable to Bayesian classification models that allow for posterior sampling. Theoretically, we show that the idea of amortization incurs no additional costs on approximation performance. Empirical results validate the strong practical performance of our approach.
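The contrast the abstract draws can be sketched in a few lines of NumPy: MC integration averages the softmax outputs of many posterior weight samples (S forward passes per test input), while an amortized model is a single cheap network distilled to match that MC predictive. The toy linear-softmax classifier, the sample count, and the plain KL-matching objective below are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy Bayesian linear-softmax classifier. We assume posterior sampling is
# available: S weight samples, e.g., from MCMC or a variational posterior.
D, K, S, N = 5, 3, 200, 64
W_mean = rng.normal(size=(D, K))                      # hypothetical posterior mean
W_samples = W_mean + rng.normal(size=(S, D, K))       # posterior weight samples
X = rng.normal(size=(N, D))                           # test inputs

# Monte Carlo predictive: average the per-sample class probabilities.
# Accurate, but requires S model evaluations per input at test time.
logits = np.einsum('nd,sdk->snk', X, W_samples)
p_mc = softmax(logits).mean(axis=0)                   # (N, K) predictive over the simplex

# Amortized student: one parameterized model trained so its output matches
# the MC predictive (knowledge-distillation style). Minimizing the mean
# cross-entropy against p_mc minimizes KL(p_mc || p_student) up to a constant.
W_student = np.zeros((D, K))
lr = 0.5
for _ in range(300):
    p_s = softmax(X @ W_student)
    grad = X.T @ (p_s - p_mc) / N                     # gradient of mean cross-entropy
    W_student -= lr * grad

p_student = softmax(X @ W_student)
kl = (p_mc * (np.log(p_mc) - np.log(p_student))).sum(axis=1).mean()
print(f"mean KL(MC || student) = {kl:.4f}")
```

At test time the student needs one forward pass instead of S, which is the speedup the abstract refers to; the residual KL measures how much predictive uncertainty is lost to the amortization.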
Research Area(s)
- Bayes method, deep neural network (NN), knowledge distillation, predictive uncertainty
Citation Format(s)
Accelerating Monte Carlo Bayesian Prediction via Approximating Predictive Uncertainty Over the Simplex. / Cui, Yufei; Yao, Wuguannan; Li, Qiao et al.
In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 4, 04.2022, p. 1492-1506.