Automatic image annotation based on generalized relevance models

Zhiwu Lu, Horace H. S. Ip

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

2 Citations (Scopus)

Abstract

This paper presents a generalized relevance model for automatic image annotation that learns the correlations between images and annotation keywords. Unlike previous relevance models, which can only propagate keywords from the training images to the test images, the proposed model can perform extra keyword propagation among the test images themselves. We also give a convergence analysis of the iterative algorithm derived from the proposed model. Moreover, to estimate the joint probability of observing an image together with possible annotation keywords, we define the inter-image relations by proposing a new spatial Markov kernel based on 2D Markov models. The main advantage of our spatial Markov kernel is that the intra-image context can be exploited for automatic image annotation, which distinguishes it from traditional bag-of-words methods. Experiments on two standard image databases demonstrate that the proposed model outperforms state-of-the-art annotation models. © Springer Science+Business Media, LLC 2010.
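The abstract describes the propagation scheme only at a high level. The sketch below illustrates how relevance-style keyword propagation, including the extra propagation step among test images, could look in practice; it assumes generic precomputed similarity kernels and illustrative parameters (`alpha`, `n_iter`) rather than the paper's actual spatial Markov kernel or probability estimation.

```python
import numpy as np

def propagate_keywords(K_test_train, K_test_test, Y_train, alpha=0.5, n_iter=20):
    """Illustrative relevance-style keyword propagation (not the paper's exact model).

    K_test_train : (n_test, n_train) similarities between test and training images
    K_test_test  : (n_test, n_test)  similarities among the test images
    Y_train      : (n_train, n_kw)   binary keyword annotations of the training images
    Returns an (n_test, n_kw) matrix of keyword scores for the test images.
    """
    # Step 1: propagate keywords from training images to test images,
    # weighting each training annotation by visual similarity.
    F0 = K_test_train @ Y_train
    F0 = F0 / (F0.sum(axis=1, keepdims=True) + 1e-12)

    # Step 2: extra propagation among the test images themselves.
    # Row-normalizing the test-test kernel makes the update a contraction
    # for alpha < 1, so the iteration converges to a fixed point.
    S = K_test_test / (K_test_test.sum(axis=1, keepdims=True) + 1e-12)
    F = F0.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * F0
    return F

# Toy usage with random (hypothetical) kernels and 5 keywords:
rng = np.random.default_rng(0)
K_tt = rng.random((4, 4)); K_tt = (K_tt + K_tt.T) / 2
K_tr = rng.random((4, 10))
Y = (rng.random((10, 5)) > 0.5).astype(float)
scores = propagate_keywords(K_tr, K_tt, Y)
top_keywords = np.argsort(-scores, axis=1)[:, :3]  # top-3 keyword indices per test image
```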
Original language: English
Pages (from-to): 23-33
Journal: Journal of Signal Processing Systems
Volume: 65
Issue number: 1
DOIs
Publication status: Published - Oct 2011

Research Keywords

  • Automatic image annotation
  • Keyword propagation
  • Markov models
  • Relevance models
