Modeling music as a dynamic texture

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

49 Scopus Citations

Author(s)

Barrington, Luke; Chan, Antoni B.; Lanckriet, Gert

Detail(s)

Original language: English
Pages (from-to): 602-612
Journal / Publication: IEEE Transactions on Audio, Speech and Language Processing
Volume: 18
Issue number: 3
Publication status: Published - Mar 2010
Externally published: Yes

Abstract

We consider representing a short temporal fragment of musical audio as a dynamic texture, a model of both the timbral and rhythmical qualities of sound, two of the important aspects required for automatic music analysis. The dynamic texture model treats a sequence of audio feature vectors as a sample from a linear dynamical system. We apply this new representation to the task of automatic song segmentation. In particular, we cluster audio fragments, extracted from a song, as samples from a dynamic texture mixture (DTM) model. We show that the DTM model can both accurately cluster coherent segments in music and detect transition boundaries. Moreover, the generative character of the proposed model of music makes it amenable to a wide range of applications besides segmentation. As examples, we use DTM models of songs to suggest possible improvements in other music information retrieval applications such as music annotation and similarity. © 2006 IEEE.
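The abstract's core idea, treating a sequence of audio feature vectors as a sample from a linear dynamical system, can be sketched as follows. This is an illustrative toy only: the dimensions, parameter values, and function name are invented for demonstration, and in the paper the corresponding parameters would be learned from audio features rather than fixed by hand.

```python
import numpy as np

# A dynamic texture models a sequence of feature vectors y_t as the
# output of a linear dynamical system:
#   x_{t+1} = A x_t + v_t,   y_t = C x_t + w_t,
# with Gaussian noise v_t ~ N(0, Q) and w_t ~ N(0, R).

rng = np.random.default_rng(0)

n, m, T = 4, 13, 100  # hidden-state dim, feature dim (e.g. MFCC-like), length

# Hypothetical parameters (chosen by hand here purely for illustration).
A = 0.9 * np.eye(n)                # state transition matrix
C = rng.standard_normal((m, n))    # observation matrix
Q = 0.01 * np.eye(n)               # state noise covariance
R = 0.05 * np.eye(m)               # observation noise covariance

def sample_dynamic_texture(A, C, Q, R, T, rng):
    """Draw one length-T sequence of feature vectors from the LDS."""
    n, m = A.shape[0], C.shape[0]
    x = rng.multivariate_normal(np.zeros(n), Q)  # initial hidden state
    Y = np.empty((T, m))
    for t in range(T):
        Y[t] = C @ x + rng.multivariate_normal(np.zeros(m), R)
        x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    return Y

Y = sample_dynamic_texture(A, C, Q, R, T, rng)
print(Y.shape)  # one sampled "texture": T feature vectors of dimension m
```

A dynamic texture mixture, as used for segmentation in the paper, would posit several such systems (one per musical segment type) and assign each extracted audio fragment to the component most likely to have generated it.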

Research Area(s)

  • Automatic segmentation, Dynamic texture model (DTM), Music modeling, Music similarity

Citation Format(s)

Modeling music as a dynamic texture. / Barrington, Luke; Chan, Antoni B.; Lanckriet, Gert.
In: IEEE Transactions on Audio, Speech and Language Processing, Vol. 18, No. 3, 03.2010, p. 602-612.
