Modeling Spatial and Temporal Variation in Motion Data

Manfred Lau, Ziv Bar-Joseph, James Kuffner

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motions and for 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than the typical game and crowd-simulation approach of replaying a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement. © 2009, ACM. All rights reserved.
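To make the abstract's idea concrete, the following is a minimal sketch, not the authors' implementation: a first-order linear-Gaussian transition model, which is one simple instance of a Dynamic Bayesian Network, fit to example motion sequences and then sampled to produce statistically similar variants. The function names (`fit_lds`, `sample_variant`) and the choice of a single Markov transition are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fit_lds(examples):
    """Fit x_{t+1} ~ N(A x_t + b, Sigma) from example sequences.

    examples: list of (T_i, D) arrays of pose vectors (e.g. joint angles).
    """
    X = np.concatenate([seq[:-1] for seq in examples])   # states x_t
    Y = np.concatenate([seq[1:] for seq in examples])    # successors x_{t+1}
    # Least-squares fit of the affine transition [A | b].
    Xa = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
    A, b = W[:-1].T, W[-1]
    resid = Y - Xa @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(X.shape[1])  # regularized noise
    return A, b, Sigma

def sample_variant(A, b, Sigma, x0, T, seed=None):
    """Roll the learned model forward to synthesize a new variant."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    xs = [x0]
    for _ in range(T - 1):
        mean = A @ xs[-1] + b                 # conditional mean given x_t
        xs.append(mean + L @ rng.standard_normal(len(x0)))
    return np.stack(xs)
```

The per-step conditional distribution is what gives the model its conditional-independence structure: each frame depends only on its predecessor, so the joint distribution over a whole sequence factorizes into frame-to-frame conditionals, and each sample drawn from the model is a new sequence rather than a replay of an input clip.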
Original language: English
Article number: 171
Journal: ACM Transactions on Graphics
Volume: 28
Issue number: 5
DOIs
Publication status: Published - Dec 2009
Externally published: Yes

Bibliographical note

This article was also published in Proceedings - ACM SIGGRAPH Asia 2009 Papers.

Research Keywords

  • Human Animation
  • Machine Learning
  • Motion Capture
  • Variation
