Semantic reasoning in zero example video event retrieval

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

16 Scopus Citations

Author(s)

  • Maaike H. T. DE BOER
  • Klamer SCHUTTE
  • Wessel KRAAIJ

Related Research Unit(s)

Detail(s)

Original language: English
Article number: 60
Journal / Publication: ACM Transactions on Multimedia Computing, Communications and Applications
Volume: 13
Issue number: 4
Online published: Oct 2017
Publication status: Published - Nov 2017

Abstract

Searching digital video data for high-level events, such as a parade or a car accident, is challenging when the query is textual and lacks visual example images or videos. Current research in deep neural networks is highly beneficial for the retrieval of high-level events using visual examples, but without examples it is still hard to (1) determine which concepts are useful to pre-train (the Vocabulary challenge) and (2) select which pre-trained concept detectors are relevant for a certain unseen high-level event (the Concept Selection challenge). In this article, we present our Semantic Event Retrieval System, which (1) shows the importance of high-level concepts in a vocabulary for the retrieval of complex and generic high-level events and (2) uses a novel concept selection method (i-w2v) based on semantic embeddings. Our experiments on the international TRECVID Multimedia Event Detection benchmark show that a diverse vocabulary including high-level concepts improves performance on the retrieval of high-level events in videos and that our novel method outperforms a knowledge-based concept selection method.
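
To make the Concept Selection challenge concrete, the sketch below shows one common embedding-based approach: rank the names of pre-trained concept detectors by the cosine similarity of their word embeddings to the (averaged) embedding of the textual event query. This is an illustrative approximation only, not the authors' exact i-w2v algorithm; the `load_embeddings` helper and the example concept vocabulary are hypothetical placeholders.

```python
# Illustrative sketch: select relevant concept detectors for a textual
# event query by comparing word embeddings (e.g. word2vec-style vectors).
# NOTE: hypothetical helper names; not the i-w2v method from the paper.

import numpy as np


def load_embeddings(path: str) -> dict[str, np.ndarray]:
    """Hypothetical loader returning a word -> vector mapping,
    e.g. from a pre-trained word2vec model on disk."""
    raise NotImplementedError


def embed(text: str, emb: dict[str, np.ndarray]) -> np.ndarray:
    """Average the embeddings of the in-vocabulary words of `text`."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    dim = next(iter(emb.values())).shape
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, guarding against zero vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def select_concepts(query: str, concepts: list[str],
                    emb: dict[str, np.ndarray], top_k: int = 5):
    """Rank concept-detector names by similarity to the event query."""
    q = embed(query, emb)
    scored = [(c, cosine(q, embed(c, emb))) for c in concepts]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]


# Example usage (hypothetical detector vocabulary):
# emb = load_embeddings("word2vec.bin")
# select_concepts("dog show", ["dog", "parade", "car", "stage", "crowd"], emb)
```

The selected detectors' scores on a video can then be combined into a single ranking score for the unseen event, which is the general zero-example retrieval setting the abstract describes.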

Research Area(s)

  • Content-based visual information retrieval, Multimedia event detection, Semantics, Zero shot

Bibliographic Note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).