Script-based facial gesture and speech animation using a NURBS based face model

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

13 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 881-891
Journal / Publication: Computers and Graphics (Pergamon)
Volume: 20
Issue number: 6 SPEC. ISS.
Publication status: Published - Nov 1996

Abstract

In this paper, we present a technique for simulating different facial gestures and speech. The distinguishing features of this work are twofold. First, we adopt a four-level hierarchical, non-uniform rational B-spline (NURBS) based face model. A NURBS surface representation of the face offers increased smoothness and ease of reshaping over other forms of geometric representation. Second, mouth movement animation and sound production in speech are phoneme based, and an English text-to-phoneme parser is used to translate any English text in speech into its phoneme equivalent. As the phoneme is the basic unit of mouth movement and sound production, a phoneme-based approach to speech animation resembles actual speech and allows arbitrary English text, rather than a restricted set of tokens, to be spoken. The Facial Action Coding System is also adopted to control the modification of the face model, as it describes the basis of facial expressions. Further, a user interface is developed that allows the user to interactively edit, or load, a text script describing the animation sequence in terms of facial gesture names and English text. The system parses the English text in the script into phoneme strings. The animation sequence described by the script can then be generated and played back in a flexible way. Copyright © 1996 Elsevier Science Ltd.
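The phoneme-driven pipeline the abstract describes (English text → phoneme string → mouth-shape key frames) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' actual system: the toy pronunciation lexicon, the phoneme-to-viseme table, and the function names are all hypothetical stand-ins (a real text-to-phoneme parser would handle arbitrary English text rather than a fixed word list).

```python
# Hypothetical sketch of a phoneme-based speech animation pipeline.
# LEXICON and VISEMES are illustrative placeholders, not the paper's data.

# Toy pronunciation dictionary; a full system would use a rule-based
# text-to-phoneme parser or a large lexicon instead.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

# Map each phoneme to a named mouth-shape (viseme) key frame that the
# face model would interpolate toward.
VISEMES = {
    "HH": "open_slight", "AH": "open_wide", "L": "tongue_up",
    "OW": "rounded", "W": "rounded", "ER": "open_mid", "D": "tongue_up",
}

def text_to_phonemes(text):
    """Translate English text into its phoneme equivalent by lexicon lookup."""
    phonemes = []
    for word in text.lower().split():
        if word not in LEXICON:
            raise KeyError(f"no pronunciation for {word!r}")
        phonemes.extend(LEXICON[word])
    return phonemes

def phonemes_to_keyframes(phonemes, frames_per_phoneme=3):
    """Expand a phoneme string into a per-frame sequence of viseme targets."""
    return [VISEMES[p] for p in phonemes for _ in range(frames_per_phoneme)]

if __name__ == "__main__":
    phonemes = text_to_phonemes("hello world")
    frames = phonemes_to_keyframes(phonemes)
    print(phonemes)   # 8 phonemes for the two words
    print(len(frames))  # 8 phonemes * 3 frames each = 24
```

In the paper's system, each viseme target would drive control-point displacements on the NURBS face surface, with facial gestures layered on top via Facial Action Coding System action units named in the animation script.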