Semantic Analysis and Annotation of Images by a novel Hierarchical Spatial Hidden Markov Model

Project: Research


Description

Previous work on content-based image retrieval (CBIR) has highlighted the limitations of approaches that assume a direct mapping exists between low-level features and high-level semantics. It is now clear that this assumption does not hold for many applications, and the term “semantic gap” has been introduced to reflect this observation. Consequently, automatic semantic annotation and retrieval of images has become an area of active research. Here the researchers propose a new hierarchical spatial hidden Markov model (HS-HMM) for automatically generating textual annotations for images. The HS-HMM is a hierarchical generalization of the researchers’ previous work on the spatial HMM and is motivated by two observations: hierarchical, recursive, or repetitive structure occurs everywhere in the natural world, and consistent spatial and cross-scale relations exist among the semantic concepts common to a specific class of images. The HS-HMM differs from previous hierarchical HMM approaches in that it aims to extract and model the spatial and contextual relations of hidden states found in a class of images, both within and across scales, and therefore opens up a new multi-scale approach to bridging the semantic gap.
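To make the general idea concrete, the following is a minimal, hypothetical sketch (not the authors' HS-HMM) of hidden semantic states over image blocks with spatial dependencies within a scale and a cross-scale dependency on a coarser layer. The state labels, grid sizes, Gaussian emissions, and the product-of-experts coupling of neighbour and parent transitions are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of a hierarchical spatial HMM over image blocks (assumed structure).
rng = np.random.default_rng(0)

N_STATES = 3          # e.g. "sky", "water", "sand" (assumed concept labels)
COARSE, FINE = 2, 4   # grid sizes at two scales (fine grid refines the coarse grid)
FEAT_DIM = 5          # low-level feature dimension per block (assumed)

# Spatial transitions: P(state | left neighbour) and P(state | top neighbour).
A_left = rng.dirichlet(np.ones(N_STATES), size=N_STATES)
A_top = rng.dirichlet(np.ones(N_STATES), size=N_STATES)
# Cross-scale transition: P(child state | parent state at the coarser scale).
A_parent = rng.dirichlet(np.ones(N_STATES), size=N_STATES)
pi = rng.dirichlet(np.ones(N_STATES))          # initial state distribution
means = rng.normal(size=(N_STATES, FEAT_DIM))  # Gaussian emission means per state


def sample_layer(size, parent=None):
    """Sample a size x size grid of hidden states with left/top/parent dependencies."""
    states = np.zeros((size, size), dtype=int)
    for i in range(size):
        for j in range(size):
            # Combine the conditionals of the available neighbours
            # (a crude product-of-experts; the actual coupling is an assumption).
            p = pi.copy()
            if j > 0:
                p = p * A_left[states[i, j - 1]]
            if i > 0:
                p = p * A_top[states[i - 1, j]]
            if parent is not None:
                p = p * A_parent[parent[i * COARSE // size, j * COARSE // size]]
            p /= p.sum()
            states[i, j] = rng.choice(N_STATES, p=p)
    return states


coarse_states = sample_layer(COARSE)
fine_states = sample_layer(FINE, parent=coarse_states)
# Emit one feature vector per fine-scale block from its state's Gaussian.
features = means[fine_states] + 0.1 * rng.normal(size=(FINE, FINE, FEAT_DIM))

print("coarse-scale states:\n", coarse_states)
print("fine-scale states:\n", fine_states)
```

In an annotation setting, inference would run in the opposite direction: given the observed block features, the most likely grid of hidden states is decoded and the state labels are emitted as textual annotations; the sketch above only illustrates the generative structure being assumed.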

Detail(s)

Project number: 9041233
Grant type: GRF
Status: Finished
Effective start/end date: 1/10/07 → 13/04/11