Modeling and training emotional talking faces of virtual actors in synthetic movies

Research output: Journal Publications and Reviews

Detail(s)

Original language: English
Journal / Publication: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 4067
Publication status: Published - 2000
Externally published: Yes

Conference

Title: Visual Communications and Image Processing 2000
City: Perth, Australia
Period: 20 - 23 June 2000

Abstract

Three-dimensional computer-animated movies with synthetic characters have recently emerged as a new form of entertainment. The technology available today can produce near-realistic rendered 3D computer graphics models of expressive, talking, acting humanoids and other characters in life-like scenes. However, a significant amount of work by skilled animators and artists is still required to produce these synthetic character performances. This paper presents an overview of a virtual actor system composed of several subsystems designed to automate some of these animation tasks, with an emphasis on the facial animation of virtual actors. The paper specifically details the situational processor component of the framework, a major building block in the automatic virtual actor system. The automated system is realized as an expert system built on a fuzzy knowledge-based control system. Fuzzy linguistic rules are used to train virtual actors to select appropriate emotions and gestures in the different situations of a synthetic movie, whose higher-level parameters are provided by human directors. Theories of emotion, personality, dialogue, and acting, together with empirical evidence, are incorporated into our framework and knowledge bases to produce promising results.
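
To illustrate the kind of fuzzy linguistic rules the abstract describes, the following minimal Python sketch maps a single director-supplied situation parameter to an emotion intensity for a virtual actor. The variable names, membership functions, and rules here are hypothetical examples, not the paper's actual situational processor or knowledge bases.

```python
# Illustrative sketch only (not the paper's implementation): a tiny fuzzy
# rule base that maps a situation parameter ("threat level" of the scene)
# to the intensity of one emotion ("fear") for a virtual actor.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for the input "threat level" (normalized to 0..1).
THREAT = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x,  0.0, 0.5, 1.0),
    "high":   lambda x: tri(x,  0.5, 1.0, 1.5),
}

# Crisp output levels for "fear" intensity (singleton consequents keep
# the defuzzification step short).
FEAR = {"calm": 0.1, "uneasy": 0.5, "terrified": 0.9}

# Fuzzy linguistic rules: IF threat IS <term> THEN fear IS <term>.
RULES = [("low", "calm"), ("medium", "uneasy"), ("high", "terrified")]

def infer_fear(threat_level: float) -> float:
    """Weighted-average defuzzification over the fired rules."""
    num = den = 0.0
    for threat_term, fear_term in RULES:
        w = THREAT[threat_term](threat_level)   # rule firing strength
        num += w * FEAR[fear_term]
        den += w
    return num / den if den else 0.0

if __name__ == "__main__":
    for t in (0.1, 0.5, 0.8):
        print(f"threat={t:.1f} -> fear intensity={infer_fear(t):.2f}")
```

In a fuller system of the kind the abstract outlines, further inputs (dialogue context, personality traits, the director's higher-level parameters) and further outputs (gesture selection) would be handled by additional rules of the same form.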