Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures

Research output: Journal Publications and Reviews › Publication in refereed journal (peer-reviewed)

344 Scopus Citations



Original language: English
Pages (from-to): 909-926
Journal / Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 5
Online published: 5 Jul 2007
Publication status: Published - May 2008
Externally published: Yes


A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to prior work in linear systems, machine learning, time-series clustering, control theory, and computer vision. Experiments show that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g., fire, steam, water, and vehicle and pedestrian traffic). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g., optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in clustering and segmenting video of such processes.
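As a rough illustration of the generative model described above, the sketch below samples frames from a single dynamic texture, i.e., a linear dynamical system with hidden state x_t and observed (flattened) frame y_t. The specific dimensions, parameter values, and function name are illustrative assumptions, not taken from the paper; a mixture model would draw each sequence from one of K such systems.

```python
import numpy as np

def sample_dynamic_texture(A, C, Q, R, x0, T, rng):
    """Sample T frames from a linear dynamical system (dynamic texture):
        x_{t+1} = A x_t + v_t,  v_t ~ N(0, Q)   (hidden state dynamics)
        y_t     = C x_t + w_t,  w_t ~ N(0, R)   (observed frame, flattened)
    Returns the state trajectory X (T x n) and observations Y (T x m).
    """
    n, m = A.shape[0], C.shape[0]
    X = np.zeros((T, n))
    Y = np.zeros((T, m))
    x = x0
    for t in range(T):
        X[t] = x
        # Observation: project state to pixel space and add observation noise.
        Y[t] = C @ x + rng.multivariate_normal(np.zeros(m), R)
        # State update: apply transition matrix and add process noise.
        x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    return X, Y

# Toy dimensions for illustration (real video would use a much larger m).
rng = np.random.default_rng(0)
n, m, T = 2, 4, 50
A = 0.9 * np.eye(n)                  # stable transition (spectral radius < 1)
C = rng.standard_normal((m, n))      # state-to-pixel projection
Q = 0.01 * np.eye(n)                 # process noise covariance
R = 0.01 * np.eye(m)                 # observation noise covariance
X, Y = sample_dynamic_texture(A, C, Q, R, np.ones(n), T, rng)
print(Y.shape)  # (50, 4)
```

In the mixture setting, the EM algorithm alternates between computing posterior assignments of each sequence to the K components (E-step, via Kalman smoothing under each component) and re-estimating each component's parameters (A, C, Q, R) from its softly assigned sequences (M-step).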

Research Area(s)

  • Dynamic texture, Expectation-maximization, Kalman filter, Linear dynamical systems, Mixture models, Motion segmentation, Probabilistic models, Temporal textures, Time-series clustering, Video clustering, Video modeling