Generalized Competitive Learning of Gaussian Mixture Models

Research output: Journal Publications and Reviews › Publication in refereed journal

12 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 901-909
Journal / Publication: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Volume: 39
Issue number: 4
Online published: 7 Apr 2009
Publication status: Published - Aug 2009

Abstract

When fitting Gaussian mixtures to multivariate data, it is crucial to select the appropriate number of Gaussians, a task generally referred to as the model selection problem. Under regularization theory, we aim to solve this model selection problem by developing an entropy regularized likelihood (ERL) learning principle on Gaussian mixtures, and we further present a gradient algorithm for ERL learning. Through theoretical analysis, we show that a mechanism of generalized competitive learning is inherent in ERL learning; this mechanism leads to automatic model selection on Gaussian mixtures and also makes the ERL learning algorithm less sensitive to initialization than the standard expectation-maximization (EM) algorithm. Experiments on simulated data verify the theoretical analysis. Moreover, the ERL learning algorithm outperforms other competitive learning algorithms in the application of unsupervised image segmentation.
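The model-selection mechanism described in the abstract can be illustrated with a minimal sketch: an EM-style fit of a deliberately over-specified Gaussian mixture in which the mixing-weight update carries an entropy-derived penalty, so that components with below-average log-weight are progressively suppressed and annihilated. This is not the paper's ERL gradient algorithm; the penalized weight update, the coefficient `gamma`, the annihilation `threshold`, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: two well-separated Gaussians (illustrative, not from the paper)
x = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])
n = len(x)

# Deliberately over-specified mixture: K = 5 components for 2 true clusters
K = 5
w = np.full(K, 1.0 / K)                    # mixing weights
mu = rng.choice(x, size=K, replace=False)  # means initialized at data points
var = np.full(K, 1.0)                      # component variances

gamma = 0.2       # strength of the entropy penalty (assumed value)
threshold = 1e-3  # annihilate components whose weight falls below this

def gauss(x, mu, var):
    """Univariate Gaussian density, vectorized over components."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * gauss(x[:, None], mu[None, :], var[None, :])  # shape (n, K)
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step for means and variances (standard EM updates)
    Nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-6

    # Entropy-penalized weight update (heuristic fixed point): components
    # with below-average log-weight shrink, giving competitive behavior.
    # The update preserves sum(w) = 1 by construction.
    logw = np.log(w)
    w = Nk / n + gamma * w * (logw - np.dot(w, logw))

    # Annihilation: drop components whose weight has collapsed
    keep = w > threshold
    w, mu, var = w[keep], mu[keep], var[keep]
    w = w / w.sum()

print("surviving components:", len(w))
print("weights:", np.round(w, 3))
print("means:", np.round(np.sort(mu), 2))
```

With `gamma = 0` the loop reduces to ordinary EM and typically keeps all five components; the entropy term is what supplies the "rich get richer, poor get annihilated" pressure that the paper analyzes as generalized competitive learning.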

Research Area(s)

  • Gaussian mixture model, entropy regularized likelihood learning, generalized competitive learning