Optimisation of HMM topology and its model parameters by genetic algorithms

Research output: Publication in refereed journal (peer-reviewed)

78 Scopus Citations



Original language: English
Pages (from-to): 509-522
Journal / Publication: Pattern Recognition
Issue number: 2
Publication status: Published - Feb 2001


The hidden Markov model (HMM) is currently the most popular approach to speech recognition. However, finding a good HMM topology and its optimised model parameters remains of great interest to researchers in this area. In our previous work, we successfully applied the genetic algorithm (GA) to the HMM training process to obtain optimised model parameters (Chau et al., Proc. ICASSP (1997) 1727). In this paper, we extend that work and propose a new training method, based on the GA and the Baum-Welch algorithm, that produces an HMM with an optimised number of states as well as optimised model parameters. This method not only overcomes the slow convergence of the simple GA-HMM approach; it also finds a better number of states for the HMM topology together with its model parameters. In experiments on 100 words extracted from the TIMIT corpus, our method finds the optimal topology in all cases, and the HMMs trained by our GA-HMM training have better recognition capability than HMMs trained by the Baum-Welch algorithm. When 290 words randomly selected from the TIMIT database are used to test the recognition performance of both approaches, the GA-HMM approach achieves a recognition rate of 95.86%, while the Baum-Welch method achieves 93.1%. This implies that the HMMs trained by our GA-HMM method are better optimised than those trained by the Baum-Welch method.
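The core idea above, evolving the number of HMM states with a GA while a likelihood-based fitness guides the search, can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: the `fitness` function here is a hypothetical placeholder standing in for a full Baum-Welch training run that would return the trained model's log-likelihood, and all parameter names and ranges are assumptions.

```python
import random

def ga_optimize_states(fitness, state_range=(2, 10), pop_size=8,
                       generations=20, mutation_rate=0.3, seed=0):
    """Toy GA searching for the HMM state count that maximises `fitness`.

    `fitness(n)` is a placeholder for training an n-state HMM with
    Baum-Welch and returning its log-likelihood on the training data.
    """
    rng = random.Random(seed)
    lo, hi = state_range
    # Each individual is simply a candidate number of states.
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                     # arithmetic crossover
            if rng.random() < mutation_rate:         # +/-1 state mutation
                child = min(hi, max(lo, child + rng.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical fitness surface whose likelihood peaks at 5 states.
best = ga_optimize_states(lambda n: -(n - 5) ** 2)
```

In the actual method, each fitness evaluation would be far more expensive (a Baum-Welch run per candidate), which is why combining the GA's global topology search with Baum-Welch's fast local parameter refinement addresses the slow convergence of a plain GA-HMM approach.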