Speech utterance classification model training without manual transcriptions

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

11 Scopus Citations

Author(s)

Wang, Ye-Yi; Lee, John; Acero, Alex

Detail(s)

Original language: English
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: I553-I556
Volume: 1
Publication status: Published - 2006
Externally published: Yes

Publication series

Name
Volume: 1
ISSN (Print): 1520-6149

Conference

Title: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2006
Place: France
City: Toulouse
Period: 14 - 19 May 2006

Abstract

Speech utterance classification has been widely applied to a variety of spoken language understanding tasks, including call routing, dialog systems, and command and control. Most speech utterance classification systems adopt a data-driven statistical learning approach that requires manually transcribed and annotated training data. In this paper we introduce a novel classification model training approach based on unsupervised language model adaptation. It requires only the wave files of the training speech utterances and their corresponding classification destinations for model training; no manual transcription of the utterances is necessary. Experimental results show that this approach, which is much cheaper to implement, achieves classification accuracy at the same level as a model trained with manual transcriptions. © 2006 IEEE.
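
The abstract describes the pipeline only at a high level. As a rough illustration of the idea, the Python sketch below decodes the unlabeled training waves with a speech recognizer, adapts the language model on the recognized text, and then trains the classifier on the resulting hypotheses. The `decode` and `adapt_lm` functions are hypothetical placeholders, and the scikit-learn classifier is an assumed stand-in; the paper does not specify these components.

```python
# Minimal sketch of the training loop the abstract describes, under stated
# assumptions: `decode` and `adapt_lm` are hypothetical stand-ins (the paper
# does not expose an API), and the scikit-learn classifier is an illustrative
# choice, not the model the authors used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def decode(wav_path, language_model):
    """Hypothetical ASR call: return the 1-best hypothesis for one wave file."""
    raise NotImplementedError("plug in a real speech recognizer here")


def adapt_lm(hypotheses):
    """Hypothetical unsupervised LM adaptation: re-estimate the language
    model from the recognized text so the next decoding pass improves."""
    raise NotImplementedError("plug in a real LM estimator here")


def train_without_transcriptions(wav_paths, destinations, language_model,
                                 n_iterations=3):
    """Train an utterance classifier from wave files and their
    classification destinations alone, with no manual transcriptions.

    wav_paths    -- training utterance wave files
    destinations -- their classification targets (e.g. call-routing labels)
    """
    hypotheses = []
    for _ in range(n_iterations):
        # 1. Recognize every training utterance with the current LM; the
        #    hypotheses stand in for the missing manual transcriptions.
        hypotheses = [decode(p, language_model) for p in wav_paths]
        # 2. Adapt the LM on the recognized text (unsupervised adaptation).
        language_model = adapt_lm(hypotheses)
    # 3. Train the classifier on the final hypotheses and their labels.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(hypotheses, destinations)
    return classifier
```

The point the sketch captures is that only wave files and destination labels enter the loop; hypothesized transcriptions replace manual ones throughout.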

Citation Format(s)

Speech utterance classification model training without manual transcriptions. / Wang, Ye-Yi; Lee, John; Acero, Alex.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Vol. 1 2006. p. I553-I556 1660080.
