Towards textually describing complex video contents with audio-visual concept classifiers

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-review

31 Scopus Citations

Author(s)

Tan, Chun Chet; Jiang, Yu-Gang; Ngo, Chong-Wah

Detail(s)

Original language: English
Title of host publication: MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops
Pages: 655-658
Publication status: Published - 2011

Conference

Title: 19th ACM International Conference on Multimedia (ACM Multimedia Conference 2011)
Place: United States
City: Scottsdale
Period: 28 November - 1 December 2011

Abstract

Automatically generating compact textual descriptions of complex video contents has wide applications. Building on recent advances in automatic audio-visual content recognition, this paper explores the technical feasibility of precisely recounting video contents. Using cutting-edge automatic recognition techniques, we first classify a variety of visual and audio concepts in video contents. Based on the classification results, we then apply simple rule-based methods to generate textual descriptions of the videos. Results are evaluated through carefully designed user studies. We find that state-of-the-art visual and audio concept classification, although far from perfect, provides very useful clues about what is happening in the videos. Most users involved in the evaluation confirmed the informativeness of our machine-generated descriptions. © 2011 ACM.
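The abstract's pipeline (concept classification followed by rule-based text generation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual system: the concept names, scores, confidence threshold, and sentence template are all hypothetical assumptions.

```python
# Hypothetical sketch of the rule-based description step: scores from
# audio-visual concept classifiers are thresholded and the confident
# concepts are folded into a template sentence. Names, threshold, and
# template are illustrative, not the paper's actual rules.

def describe_video(concept_scores, threshold=0.5):
    """Turn per-concept classifier scores into a short textual description."""
    # Keep only concepts the classifiers are reasonably confident about,
    # ordered from most to least confident.
    detected = [concept for concept, score
                in sorted(concept_scores.items(), key=lambda kv: -kv[1])
                if score >= threshold]
    if not detected:
        return "No confident concepts were detected in this video."
    # Simple rule: join the detected concepts into one sentence.
    return "This video likely shows: " + ", ".join(detected) + "."

# Example scores as a concept classifier might emit them (made up).
scores = {"crowd": 0.82, "music": 0.74, "outdoor": 0.61, "dog": 0.12}
print(describe_video(scores))
```

With the example scores above, only the three concepts at or above the 0.5 threshold survive, so the output is "This video likely shows: crowd, music, outdoor."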

Research Area(s)

  • Audio-visual concept classification, Textual descriptions of video content

Citation Format(s)

Towards textually describing complex video contents with audio-visual concept classifiers. / Tan, Chun Chet; Jiang, Yu-Gang; Ngo, Chong-Wah.
MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops. 2011. p. 655-658.
