Modeling of close interactions between characters for multiple character motion synthesis and interactive application


Student thesis: Doctoral Thesis



  • Chun Pong CHAN



Award date: 3 Oct 2014


Motion capture is now widely used in entertainment, including animation, film, and games. The technology has evolved considerably in recent years: prices have fallen and devices have shrunk, as with the release of the Microsoft Kinect. However, the quality of captured motion is still insufficient for fine animation and film production because of limited data accuracy and constraints in the capturing process. The industry therefore still relies on expensive equipment, most commonly optical capture systems. Operating such systems is costly, as capturing the motion and post-processing the data before it is ready for use take considerable time. This makes motion capture unaffordable for many animators, which is why keyframing remains the most popular technique for creating animation. Yet keyframing can be a painful process for animators, especially when a scene contains many characters or when the characters' movements are complex. Because capturing is difficult and expensive, captured data should be used wisely. In this thesis, we study novel methods to analyze captured data for different applications and to synthesize new multiple-character motion from existing motion data.

First, we present methods that analyze captured data for motion training, virtual partner control, and motion retrieval. For motion training, we track the user's motion in real time and compare it with pre-captured motion of professionals. Three types of visual feedback show users where their movements are wrong, how large the errors are, and how to correct them. Experimental results show that our training helps users learn significantly better than traditional self-training. For virtual partner control, we apply kd-tree indexing and k-nearest-neighbor search to identify the user's pose captured in real time.
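The real-time pose identification step can be sketched as a kd-tree nearest-neighbor lookup over pose vectors. This is a minimal illustration, not the thesis's implementation: the "database" here is a few hand-made 2D points, whereas a real pose would be a high-dimensional vector of joint positions or angles.

```python
import math

def build_kdtree(points, depth=0):
    # points: list of (pose_vector, database_index) pairs
    if not points:
        return None
    k = len(points[0][0])          # dimensionality of the pose vector
    axis = depth % k               # cycle through axes by depth
    points.sort(key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    # Returns (distance, database_index) of the closest stored pose.
    if node is None:
        return best
    point, idx = node["point"]
    d = math.dist(point, query)
    if best is None or d < best[0]:
        best = (d, idx)
    axis = node["axis"]
    diff = query[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[0]:        # far subtree may still hold a closer pose
        best = nearest(far, query, best)
    return best

# Toy database of 2D "poses" (hypothetical values for illustration)
db = [([0.0, 0.0], 0), ([5.0, 5.0], 1), ([10.0, 0.0], 2)]
tree = build_kdtree(list(db))
dist, idx = nearest(tree, [4.0, 4.0])   # user's captured pose → index 1
```

Once the nearest database pose is found, the method then adapts the stored pose pair to the user, as described next.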
Because the user's pose may differ from the poses in the database, the pose pair retrieved from the database is adapted to the user's pose in real time while the spatial relationships within the pair are preserved. Experimental results show that our method handles different types of motion, achieves fast and accurate search, and creates natural movement. For finding suitable motion data, textual descriptions may not suffice. We propose a motion retrieval method that searches existing two-character motions for interaction contexts similar to that of the query motion. The similarity between two motions is based on spatial features measured by Laplacian coordinates and topology structure, while temporal features are captured by changes in the minimum spanning tree built from the joint positions.

Capturing multiple-character motion data is challenging, and the difficulty grows with the number of characters involved. Existing methods require the user to set up complex constraints manually and cannot enrich the styles of interaction already present in the data. We present a novel method for synthesizing new two-character motion by merging two existing two-character motion clips without manually specified constraints. The spatial relationships between the two characters are modeled by their relative moving trajectories during contact and avoidance interactions. The output two-character motions are synthesized by spacetime optimization, which preserves both the spatial relationship and the local detail of each character. We further extend the method so that the user can create a scene with more than two characters interacting with each other, designing the scene with any number of characters and controlling the interactions between them.
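The spatial and temporal features described above can be illustrated with two small helpers: one computes Laplacian coordinates of joints (each joint's position minus the centroid of its skeleton neighbors), and the other measures how much the minimum spanning tree over joint positions changes between two frames. This is a hedged sketch under simplifying assumptions (2D points, a toy chain skeleton, Prim's algorithm), not the thesis's actual feature pipeline.

```python
import math

def laplacian_coords(joints, neighbors):
    # Laplacian coordinate of joint i: its position minus the centroid
    # of its neighbors in the skeleton graph (a local spatial feature).
    deltas = []
    for i, p in enumerate(joints):
        nbrs = [joints[j] for j in neighbors[i]]
        cx = sum(q[0] for q in nbrs) / len(nbrs)
        cy = sum(q[1] for q in nbrs) / len(nbrs)
        deltas.append((p[0] - cx, p[1] - cy))
    return deltas

def mst_edges(joints):
    # Minimum spanning tree over joint positions (Prim's algorithm);
    # returns the edge set as frozensets of joint indices.
    n = len(joints)
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        _, i, j = min((math.dist(joints[i], joints[j]), i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.add(frozenset((i, j)))
        in_tree.add(j)
    return edges

def topology_change(frame_a, frame_b):
    # Fraction of MST edges that differ between two frames: a simple
    # proxy for the temporal feature described in the text.
    ea, eb = mst_edges(frame_a), mst_edges(frame_b)
    return len(ea ^ eb) / len(ea | eb)
```

For a chain of three collinear joints, the MST is the chain itself, and identical frames give a topology change of zero; frames whose joints have rearranged produce a larger change score.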
The output animations show that our method creates scenes of multiple-character interaction in which the characters' positions are allocated properly and the characters interact with each other as in the input clips. Our analysis and synthesis methods address the shortage of motion data. By analyzing the local detail of individual characters and the interactions between characters in captured data, the data can be reused to build three interactive applications: motion training, interactive character control, and motion retrieval. By merging captured two-character motion data, animators can create animations with multiple characters. The various applications implemented with our proposed methods demonstrate their usefulness in practice.

    Research areas

  • Computer animation, Characters and characteristics, Computer simulation, Motion