Concept-driven multi-modality fusion for video search

Xiao-Yong Wei, Yu-Gang Jiang, Chong-Wah Ngo

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

45 Citations (Scopus)

Abstract

Just as human perception naturally gathers information from multiple sources in multi-modality form, learning from multiple modalities has become an effective scheme for various information retrieval problems. In this paper, we propose a novel multi-modality fusion approach for video search, where the search modalities are derived from a diverse set of knowledge sources, such as text transcripts from speech recognition, low-level visual features from video frames, and high-level semantic visual concepts from supervised learning. Since the effectiveness of each search modality depends greatly on the specific user query, promptly determining the importance of a modality to a user query is a critical issue in multi-modality search. Our proposed approach, named concept-driven multi-modality fusion (CDMF), exploits a large set of predefined semantic concepts to compute multi-modality fusion weights in a novel way. Specifically, CDMF decomposes the query-modality relationship into two components that are much easier to compute: query-concept relatedness and concept-modality relevancy. The former can be efficiently estimated online using semantic and visual mapping techniques, while the latter can be computed offline from the concept detection accuracy of each modality. This decomposition enables adaptive learning of fusion weights for each user query on-the-fly, in contrast to existing approaches, which mostly adopt predefined query classes and/or modality weights. Experimental results on the TREC video-retrieval evaluation 2005–2008 datasets validate the effectiveness of our approach, which outperforms existing multi-modality fusion methods and achieves near-optimal performance (relative to oracle fusion) for many test queries. © 2011 IEEE.
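The decomposition described in the abstract can be sketched in a few lines: a per-modality fusion weight is obtained by combining query-concept relatedness (estimated online) with concept-modality relevancy (computed offline from concept detection accuracy). The function and variable names, the linear combination, and the normalization below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cdmf_fusion_weights(query_concept_relatedness, concept_modality_relevancy):
    """Illustrative sketch of the CDMF weight decomposition (assumed form).

    query_concept_relatedness: shape (C,), relatedness of the query to each
        of C predefined semantic concepts (estimated online).
    concept_modality_relevancy: shape (C, M), relevancy of each concept to
        each of M modalities (computed offline, e.g., from detection accuracy).
    Returns fusion weights over the M modalities, normalized to sum to 1.
    """
    raw = query_concept_relatedness @ concept_modality_relevancy  # shape (M,)
    return raw / raw.sum()

# Hypothetical example: 3 concepts, 2 modalities (e.g., text vs. visual).
r = np.array([0.8, 0.1, 0.5])            # online: query-concept relatedness
a = np.array([[0.9, 0.2],
              [0.3, 0.7],
              [0.4, 0.8]])               # offline: concept-modality relevancy
w = cdmf_fusion_weights(r, a)

# The weights would then fuse per-modality retrieval scores for a video:
text_score, visual_score = 0.6, 0.4      # assumed per-modality scores
fused = w[0] * text_score + w[1] * visual_score
```

The point of the decomposition is that only the small vector `r` must be computed at query time; the matrix `a` is fixed offline, so the fusion weights adapt to each query without predefined query classes.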
Original language: English
Article number: 5686924
Pages (from-to): 62-73
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 21
Issue number: 1
DOIs
Publication status: Published - Jan 2011

Research Keywords

  • concept
  • Concept-driven fusion
  • multi-modality
  • semantic
  • video search
