Automatic generation of hand action codes (scripts) from key frame images for hand gesture synthesis and animation

Chinese title (translated): Automatically generating hand action codes (scripts) from key frames for hand gesture synthesis and animated display

Student thesis: Doctoral Thesis



  • Sau Wai Maria LAM

Award date: 2 Oct 2002


Hand gesture interpretation and hand motion animation are receiving increasing attention in both the computer vision and computer graphics communities because of their potential applications in advanced man-machine interfaces and in content production for multimedia entertainment. The human hand is capable of a large variety of functions, from pointing at and grasping objects of various shapes to tactile exploration, expressing feelings, and communicating with others. In this work, three major processes are involved in visualizing natural hand movement: motion data acquisition and analysis, conversion of the data into motion parameters, and motion generation carried out by a synthetic hand model. For hand modeling and motion animation, we extended a previous work that is based on human hand anatomy and controlled by a hand gesture coding system, the Hand Action Coding System (HACS), which codifies hand motion in terms of hand muscle action units (HAUs). The HACS, together with the anatomy-based hand model, allows complex sequences of hand gestures to be animated using high-level textual scripts. Much time and human effort can be saved if the system can generate the HAU script for hand motion animation directly from analyzing real hand motion. This is the objective of this project, and this thesis presents the techniques developed to achieve it. After reviewing the anatomy-based approach, the hand model was refined and made more realistic. To show that the HAU list can be identified from the orientation of the bone segments of a hand configuration, and that the anatomy-based approach can simulate the in-between animation sequence naturally, an interactive solution is developed that generates an HAU list from target gesture images.
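To make the scripting idea above concrete, the following is a minimal sketch of how a HACS-style script might drive an anatomy-based hand model: each hand action unit (HAU) is treated as a named action contributing rotations to a set of joints, and a script is a sequence of such units applied to a rest pose. All names (`HAU`, `apply_script`, the joint labels) are illustrative assumptions for this sketch, not the thesis's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class HAU:
    """One hand action unit: a named muscle action and the joint
    rotations (in degrees) it contributes. Hypothetical structure."""
    name: str
    joint_deltas: dict[str, float]  # joint name -> rotation increment

def apply_script(pose: dict[str, float], script: list[HAU]) -> dict[str, float]:
    """Apply a sequence of HAUs to a rest pose, accumulating joint
    rotations to obtain the target gesture pose."""
    result = dict(pose)
    for hau in script:
        for joint, delta in hau.joint_deltas.items():
            result[joint] = result.get(joint, 0.0) + delta
    return result

# Usage: flexing the index finger from a flat rest pose.
rest = {"index_MCP": 0.0, "index_PIP": 0.0}
script = [HAU("flex_index", {"index_MCP": 45.0, "index_PIP": 60.0})]
print(apply_script(rest, script))  # {'index_MCP': 45.0, 'index_PIP': 60.0}
```

In a full system, the in-between frames would be interpolated between such poses subject to the anatomical constraints of the hand model, which is what lets key frames alone drive a natural animation.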
To build a fully automated gesture input and analysis system, hand motion tracking, visual feature extraction, feature occlusion, and other image-processing problems then have to be considered. Because the human hand has no naturally distinctive features, color markers are attached to a gloved hand, rather than using an uncovered hand, to provide the necessary features. Techniques and algorithms are then used to select the key frames, detect the color landmarks, and determine the HAU script through analysis of the spatial and anatomical constraint relationships among these landmarks. Other contributions of the proposed approach include: (1) there is no limitation on the input gestures, so images of any arbitrary gesture can be taken as input material; (2) the occlusion problem is resolved, or reduced to a minimum, through analysis of the spatial and anatomical constraint relationships of the visible landmarks; (3) no high-speed motion tracker is needed, as the system can generate and animate the intermediate motion from only the key gesture frames appearing in the video sequence of hand motion; (4) any section of the video sequence of hand motion can be extracted and reused without redoing the whole motion-capture process; and (5) a gesture database can be built incrementally and automatically. As a by-product, the gesture analysis results can potentially be applied to inferring which muscle, or group of muscles, is involved in a gesture.
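One simple way to realize the key-frame selection step described above is to pick frames where the tracked color markers momentarily stop moving, i.e. where the hand pauses at a target gesture. The heuristic, the threshold, and the function name below are illustrative assumptions, not the thesis's algorithm.

```python
def select_key_frames(marker_tracks, motion_thresh=2.0):
    """Return indices of frames where total marker motion dips below a
    threshold, taken as pauses at target gestures.

    marker_tracks: list of frames, each a list of (x, y) marker centroids,
    with markers in consistent order across frames (a sketch assumption;
    real tracking must also handle occluded or missing markers).
    """
    key_frames = []
    for i in range(1, len(marker_tracks)):
        prev, cur = marker_tracks[i - 1], marker_tracks[i]
        # Sum of per-marker displacement (Manhattan distance) between frames.
        motion = sum(abs(ax - bx) + abs(ay - by)
                     for (ax, ay), (bx, by) in zip(prev, cur))
        if motion < motion_thresh:
            key_frames.append(i)
    return key_frames

# Usage: one marker moves quickly, pauses for two frames, then moves again;
# the pause frames are selected as key frames.
tracks = [[(0, 0)], [(10, 0)], [(10.5, 0)], [(11, 0)], [(30, 0)]]
print(select_key_frames(tracks))  # [2, 3]
```

Only these selected frames then need landmark detection and HAU analysis; the in-between motion is synthesized by the anatomy-based hand model, which is why no high-speed tracker is required.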

    Research areas

  • Human locomotion, Computer simulation, Computer animation