Approximate Inference Using Simplification of Gaussian Mixture Models
基於簡化高斯混合模型的近似推斷
Student thesis: Doctoral Thesis
Award date  11 Dec 2018 
Link(s)
Permanent Link  https://scholars.cityu.edu.hk/en/theses/theses(0ebebbab818e4b60966afd9e63ff85e1).html 

Abstract
Probabilistic modelling and Bayesian inference are principal tools in machine learning. Assuming simple probabilistic models leads to tractable and efficient inference, but such models usually cannot meet the requirements of real-world applications.
Approximate inference is therefore essential for the more expressive models whose exact inference is intractable.
Monte Carlo (MC) sampling and variational approximation are the two mainstream approaches to approximate inference. MC sampling approximates the intractable probability distribution with samples, which can be accurate but is time-consuming. Variational approximation posits an unknown variational distribution from a tractable family and determines it by solving an optimization problem; this is efficient, but tedious model-specific mathematical derivations are required.
Since finite mixture models are universal approximators of any continuous probability density, this dissertation studies the use of Gaussian Mixture Models (GMMs) in Bayesian inference. First, we propose a novel density simplification algorithm that preserves the original mixture distribution well by directly grouping the base probability densities. We derive the new Density-Preserving Hierarchical EM (DPHEM) algorithm from first principles, based on a variational approximation to the expected log-likelihood between two mixture models. Furthermore, to develop a complete framework for recursive Bayesian filtering, we propose an algorithm that approximates an arbitrary likelihood function as a sum of scaled Gaussians.
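The grouping idea can be illustrated with a deliberately simplified sketch: reduce a 1-D GMM by hard-assigning each base Gaussian to its closest reduced component (by KL divergence) and re-estimating each group by moment matching. This is only an illustrative stand-in for DPHEM, which uses a soft variational E-step rather than hard assignments; all function names and constants here are hypothetical, not the thesis implementation:

```python
import numpy as np

def kl_gauss(m0, v0, m1, v1):
    # KL divergence between two 1-D Gaussians, KL(N(m0,v0) || N(m1,v1))
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

def simplify_gmm(w, mu, var, K, iters=20):
    """Reduce an N-component 1-D GMM (w, mu, var) to K components by
    grouping base densities: hard-assign each source component to its
    closest reduced component in KL, then moment-match each group."""
    # initialise the reduced mixture from the K heaviest source components
    idx = np.argsort(w)[-K:]
    rw, rmu, rvar = w[idx] / w[idx].sum(), mu[idx].copy(), var[idx].copy()
    for _ in range(iters):
        # E-like step: assign each source component to a reduced component
        d = np.array([[kl_gauss(mu[i], var[i], rmu[j], rvar[j])
                       for j in range(K)] for i in range(len(w))])
        z = d.argmin(axis=1)
        # M-like step: moment-match the components assigned to each group
        for j in range(K):
            g = z == j
            if not g.any():
                continue
            wj = w[g].sum()
            mj = (w[g] * mu[g]).sum() / wj
            vj = (w[g] * (var[g] + (mu[g] - mj) ** 2)).sum() / wj
            rw[j], rmu[j], rvar[j] = wj, mj, vj
        rw = rw / rw.sum()
    return rw, rmu, rvar
```

Moment matching keeps the total probability mass of each group, so well-separated clusters of base densities collapse onto single components that preserve their overall location and spread.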
Applications to recursive Bayesian filtering, kernel density estimate (KDE) mixture reduction, and belief propagation show that the proposed algorithm is widely applicable to probabilistic data analysis and is more accurate than other mixture simplification methods.
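To see why simplification matters in recursive Bayesian filtering, the hypothetical 1-D sketch below performs one predict/update step of a Gaussian-sum filter under linear-Gaussian dynamics. When the likelihood is itself a sum of scaled Gaussians, the posterior component count multiplies at every step, which is exactly where a reduction method such as DPHEM would be applied (the reduction itself is omitted here):

```python
import numpy as np

def gauss_pdf(x, m, v):
    # density of a 1-D Gaussian N(m, v) evaluated at x
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def gmm_filter_step(w, mu, var, a, q, lw, lmu, lvar, y):
    """One predict/update step of a Gaussian-sum filter (illustrative).
    Prior belief: GMM (w, mu, var). Dynamics: x' = a*x + noise with
    variance q. Likelihood of observation y given state x:
    sum_k lw[k] * N(y; x + lmu[k], lvar[k])."""
    # predict: push each component through the linear-Gaussian dynamics
    pmu, pvar = a * mu, a * a * var + q
    # update: product of GMM prior and Gaussian-sum likelihood; the
    # component count becomes len(w) * len(lw), so in practice a
    # mixture-reduction step would follow
    nw, nmu, nvar = [], [], []
    for i in range(len(w)):
        for k in range(len(lw)):
            v = 1.0 / (1.0 / pvar[i] + 1.0 / lvar[k])
            m = v * (pmu[i] / pvar[i] + (y - lmu[k]) / lvar[k])
            z = gauss_pdf(y - lmu[k], pmu[i], pvar[i] + lvar[k])
            nw.append(w[i] * lw[k] * z)
            nmu.append(m)
            nvar.append(v)
    nw = np.array(nw)
    nw = nw / nw.sum()
    return nw, np.array(nmu), np.array(nvar)
```

With a single prior component and a single Gaussian likelihood term this reduces to the standard Kalman update; with a multi-term likelihood the growth in components after each step is explicit.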
Second, using DPHEM, we propose an approximate inference algorithm for robust Gaussian Process (GP) regression in which GMMs represent arbitrary non-Gaussian observation likelihoods. In addition to the latent function variables, we introduce a set of multinomial latent variables that indicate which mixture component generated each noisy observation. The hyperparameters of the GP kernel function and the GMM likelihood are then learnt by maximizing the expected complete-data log-likelihood under the posterior distribution of all latent variables. To compute this expectation in a well-represented closed form, we propose an algorithm that approximates the posterior process of the latent function by hierarchically combining posteriors on incremental subsets of the data; the same construction also serves for prediction on unseen inputs. The proposed algorithm is more succinct, and experiments on synthetic and real data show that it applies to many cases where other variational approximation algorithms would require complicated mathematical derivations.
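A toy version of the robust-regression idea can be sketched with a two-component (narrow inlier / broad outlier) Gaussian noise mixture and an EM-style loop: the E-like step computes each point's outlier responsibility from the current residuals, and the M-like step refits the GP with per-point effective noise variances. This is a minimal sketch under those assumptions, not the algorithm derived in the thesis, and all names and constants are hypothetical:

```python
import numpy as np

def rbf(x1, x2, ell=1.0, sf=1.0):
    # squared-exponential (RBF) kernel matrix between 1-D input vectors
    return sf ** 2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def robust_gp_fit(x, y, v_in=0.05, v_out=5.0, w_out=0.1, iters=5):
    """Robust GP regression with a two-component Gaussian noise mixture,
    fit by alternating GP refits and responsibility updates."""
    n = len(x)
    noise = np.full(n, v_in)          # start with the narrow inlier noise
    K = rbf(x, x)
    mean = np.zeros(n)
    for _ in range(iters):
        # M-like step: GP posterior mean at the training inputs
        mean = K @ np.linalg.solve(K + np.diag(noise), y)
        # E-like step: responsibility of the broad outlier component
        r = y - mean
        p_in = (1 - w_out) * np.exp(-0.5 * r ** 2 / v_in) / np.sqrt(v_in)
        p_out = w_out * np.exp(-0.5 * r ** 2 / v_out) / np.sqrt(v_out)
        g = p_out / (p_in + p_out)
        # effective per-point noise: blend of the two component variances
        noise = (1 - g) * v_in + g * v_out
    return mean, noise

def gp_predict(x, y, noise, xs, ell=1.0, sf=1.0):
    # GP posterior mean at unseen inputs xs, using the learnt per-point noise
    Ks = rbf(xs, x, ell, sf)
    return Ks @ np.linalg.solve(rbf(x, x, ell, sf) + np.diag(noise), y)
```

Points assigned to the broad component receive a large effective noise variance, so gross outliers are effectively downweighted in the fit rather than dragging the posterior mean.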