Abstract
This paper describes a new, efficient method for facial expression generation on cloned synthetic head models. The system drives the face with abstract facial muscles called action units (AUs), based on both anatomical muscles and the Facial Action Coding System. The expression generation method runs in real time, is less computationally expensive than physically based models, and has closer anatomical correspondence than rational free-form deformation or spline-based techniques. A real human head is cloned automatically by adapting a generic facial and head mesh to Cyberware laser-scanned data. Both the conformation of the generic head to the individual data and the fitting of texture onto it rest on a fully automatic feature extraction procedure. Individual facial animation parameters are also estimated automatically during conformation. The entire animation system is hierarchical: emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes, together with the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, with briefer discussion of the automatic cloning process and higher-level animation control.
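The hierarchy the abstract describes — emotions and visemes defined as combinations of AUs, with gestures layered on top — can be sketched in a few lines of code. This is an illustrative outline only, not the paper's implementation: the AU names, the example weights, and the `blend` helper are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Expression:
    """A facial state as a map from AU name to activation weight in [0, 1]."""
    au_weights: dict

def blend(expressions, weights):
    """Linearly combine several expressions into a single AU-weight map."""
    out = {}
    for expr, w in zip(expressions, weights):
        for au, activation in expr.au_weights.items():
            out[au] = out.get(au, 0.0) + w * activation
    return Expression(out)

# An emotion defined in terms of AUs (weights are illustrative):
smile = Expression({"AU6_cheek_raiser": 0.6, "AU12_lip_corner_puller": 0.9})
# A viseme for an open-mouth vowel, likewise illustrative:
viseme_aa = Expression({"AU26_jaw_drop": 0.7})

# A higher-level gesture as a timed schedule of lower-level units:
# (start time in seconds, expression, blend weight)
gesture = [(0.0, smile, 1.0), (0.4, viseme_aa, 0.5)]

# One animation frame where the smile and viseme overlap:
frame = blend([smile, viseme_aa], [1.0, 0.5])
```

Because every level bottoms out in AU weights, a single per-frame blend step is all the runtime needs, which is consistent with the real-time claim above.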
| Original language | English |
|---|---|
| Pages (from-to) | 447-484 |
| Journal | Circuits, Systems, and Signal Processing |
| Volume | 20 |
| Issue number | 3-4 |
| Publication status | Published - 2001 |
| Externally published | Yes |
Research Keywords
- Facial animation
- Facial modeling
- Multimedia actors
- Talking heads
- Virtual cloning