Saliency-guided Pairwise Matching

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

2 Scopus Citations

Author(s)

Shao Huang, Weiqiang Wang

Detail(s)

Original language: English
Pages (from-to): 37-43
Journal / Publication: Pattern Recognition Letters
Volume: 97
Online published: 24 Jun 2017
Publication status: Published - 1 Oct 2017

Abstract

The need for fast image retrieval has recently increased tremendously in many application areas, e.g., biomedicine, the military, commerce, and education. Research in cognitive psychology and neurobiology suggests that humans have a strong ability to perceive objects before identifying them, and human attention theories hypothesize that the human vision system (HVS) processes only parts of an image in detail while leaving the rest nearly unprocessed. In this work, we assume that humans prefer the salient regions when measuring the similarity of an image pair. The proposed saliency detection calculates local saliency by formulating the center-surround hypothesis via reconstruction residuals, together with a multi-scale factor to eliminate the impact of over-segmentation. Global saliency is estimated based on the center-bias hypothesis, and the two are fused to produce a superpixel-level saliency map. Salient regions are then generated via region growing, and integrated region matching (IRM) is finally adopted to formulate the distance metric. Experimental results on publicly available datasets show that the proposed method achieves satisfactory performance on both saliency detection and pairwise matching.
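
A minimal, self-contained sketch of this pipeline (local center-surround saliency via reconstruction residuals, a center-bias prior, saliency fusion, salient-region selection, and an IRM-style distance) is given below in plain NumPy. Every name and parameter is an illustrative assumption rather than the authors' implementation: grid cells stand in for superpixels, joint RGB histograms for the region features, a Gaussian prior for the center bias, and a simple threshold for the region-growing step; the multi-scale factor is omitted.

    # Illustrative sketch only, not the authors' code: grid cells stand in for
    # superpixels, RGB histograms for region features, a Gaussian prior for the
    # center bias, and a threshold for region growing.
    import numpy as np

    def grid_superpixels(h, w, cell=32):
        # Assumption: square grid cells approximate a superpixel segmentation.
        nrows, ncols = (h + cell - 1) // cell, (w + cell - 1) // cell
        ys, xs = np.mgrid[0:h, 0:w]
        return (ys // cell) * ncols + (xs // cell), nrows, ncols

    def region_histograms(image, labels, n_regions, bins=4):
        # One joint RGB histogram (bins**3-D) per region; image values in [0, 1].
        q = np.clip((image * bins).astype(int), 0, bins - 1)
        idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
        feats = np.zeros((n_regions, bins ** 3))
        np.add.at(feats, (labels.ravel(), idx.ravel()), 1.0)
        return feats / (feats.sum(axis=1, keepdims=True) + 1e-8)

    def local_saliency(feats, nrows, ncols):
        # Center-surround via reconstruction residual: approximate each cell's
        # histogram by a least-squares combination of its 8-neighbours'
        # histograms; a large residual marks a locally distinctive region.
        sal = np.zeros(nrows * ncols)
        for r in range(nrows):
            for c in range(ncols):
                k = r * ncols + c
                nb = [rr * ncols + cc
                      for rr in range(max(r - 1, 0), min(r + 2, nrows))
                      for cc in range(max(c - 1, 0), min(c + 2, ncols))
                      if (rr, cc) != (r, c)]
                A = feats[nb].T
                coef, *_ = np.linalg.lstsq(A, feats[k], rcond=None)
                sal[k] = np.linalg.norm(feats[k] - A @ coef)
        return sal / (sal.max() + 1e-8)

    def global_saliency(labels, n_regions, sigma=0.25):
        # Center-bias prior: regions nearer the image center score higher.
        h, w = labels.shape
        ys, xs = np.mgrid[0:h, 0:w]
        prior = np.exp(-(((ys - h / 2) / h) ** 2 + ((xs - w / 2) / w) ** 2)
                       / (2 * sigma ** 2))
        return np.array([prior[labels == k].mean() for k in range(n_regions)])

    def irm_distance(feat_a, w_a, feat_b, w_b):
        # Simplified integrated region matching: greedily match the most
        # similar region pairs first; each pair contributes its distance
        # weighted by the matched significance.
        d = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
        pa, pb = w_a / w_a.sum(), w_b / w_b.sum()
        total = 0.0
        for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
            s = min(pa[i], pb[j])
            total += s * d[i, j]
            pa[i] -= s
            pb[j] -= s
        return total

    def pairwise_distance(image_a, image_b, cell=32, tau=0.5):
        # End-to-end sketch: fuse local and global saliency, keep the most
        # salient regions (threshold in place of region growing), compare
        # the two images with the IRM-style distance.
        descs = []
        for img in (image_a, image_b):
            labels, nrows, ncols = grid_superpixels(*img.shape[:2], cell)
            feats = region_histograms(img, labels, nrows * ncols)
            sal = local_saliency(feats, nrows, ncols) \
                * global_saliency(labels, nrows * ncols)
            keep = sal >= tau * sal.max()
            descs.append((feats[keep], sal[keep]))
        (fa, sa), (fb, sb) = descs
        return irm_distance(fa, sa, fb, sb)

Given two RGB images as float arrays scaled to [0, 1], pairwise_distance(image_a, image_b) returns a smaller value when the salient content of the two images is similar; the hypothetical threshold tau and cell size are free parameters of the sketch, not values from the paper.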

Research Area(s)

  • Pairwise matching, Reconstruction residual, Saliency detection

Citation Format(s)

Saliency-guided Pairwise Matching. / Huang, Shao; Wang, Weiqiang.
In: Pattern Recognition Letters, Vol. 97, 01.10.2017, p. 37-43.
