Deep triplet residual quantization

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Detail(s)

Original language: English
Article number: 115467
Journal / Publication: Expert Systems with Applications
Volume: 184
Online published: 24 Jun 2021
Publication status: Published - 1 Dec 2021

Abstract

Quantization techniques are widely used in approximate nearest neighbor search, data compression, and related tasks. Recently, metric-learning-based deep hashing methods have taken advantage of quantization to accelerate computation with minimal accuracy loss. However, most existing deep quantization methods are designed for Euclidean distance and may not perform well in maximum inner product search (MIPS). In addition, metric learning requires an elaborate sample-selection strategy during training, which matters both for learning high-quality feature representations and for speeding up network convergence. In this paper, we propose a novel deep triplet residual quantization (DTRQ) model that integrates residual quantization (RQ) into both the triplet selection strategy and the quantization error control for MIPS. Specifically, instead of randomly grouping samples as in deep triplet quantization (DTQ), we group samples based on the geometric information provided by RQ, so that each group generates more high-quality triplets and the network converges faster. Furthermore, we decompose the triplet quantization loss into norm and angle components, which in particular reduces codeword redundancy in MIPS ranking. By threading residual quantization through both the triplet selection stage and the quantization error control, DTRQ generates high-quality, compact binary codes and yields promising image retrieval performance on three benchmark datasets: NUS-WIDE, CIFAR-10, and MS-COCO.
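
The abstract leans on two mechanisms that are easy to make concrete: residual quantization itself, and the norm/angle view of inner-product error. The sketch below is not the authors' DTRQ model; it is a minimal NumPy illustration of plain multi-stage RQ (k-means on successive residuals), plus a norm_angle_gap helper showing one plausible reading of the norm/angle decomposition. All function names and parameters (train_rq_codebooks, num_stages, num_codewords) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def train_rq_codebooks(X, num_stages=4, num_codewords=16, num_iters=20, seed=0):
    """Plain residual quantization: each stage runs k-means on the
    residuals left over by the previous stage."""
    rng = np.random.default_rng(seed)
    residual = X.copy()
    codebooks = []
    for _ in range(num_stages):
        # Initialise codewords from randomly chosen residual vectors.
        C = residual[rng.choice(len(residual), num_codewords, replace=False)].copy()
        for _ in range(num_iters):
            # Assign each residual to its nearest codeword (squared L2).
            assign = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(axis=1)
            # Move each codeword to the mean of its assigned residuals.
            for k in range(num_codewords):
                members = residual[assign == k]
                if len(members) > 0:
                    C[k] = members.mean(axis=0)
        # Final assignment with the trained codebook, then peel it off:
        # the next stage models whatever this stage failed to capture.
        assign = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        codebooks.append(C)
        residual = residual - C[assign]
    return codebooks

def rq_encode(x, codebooks):
    """Greedily pick the nearest codeword at each stage; the code is one
    index per stage, i.e. num_stages * log2(num_codewords) bits in total."""
    codes, residual = [], x.copy()
    for C in codebooks:
        k = int(((residual - C) ** 2).sum(axis=1).argmin())
        codes.append(k)
        residual = residual - C[k]
    return codes

def rq_decode(codes, codebooks):
    """Reconstruct the vector as the sum of the selected codewords."""
    return sum(C[k] for k, C in zip(codes, codebooks))

def norm_angle_gap(x, x_hat, eps=1e-12):
    """One plausible reading of the norm/angle split (an assumption, not
    the paper's exact loss): how much quantization distorts a vector's
    length vs. its direction. Both matter for MIPS, since
    <q, x_hat> = |q| * |x_hat| * cos(angle(q, x_hat))."""
    norm_gap = abs(np.linalg.norm(x) - np.linalg.norm(x_hat))
    cos = float(x @ x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat) + eps)
    return norm_gap, 1.0 - cos

# Toy usage: reconstruction error shrinks as stages are stacked.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 32)).astype(np.float32)
books = train_rq_codebooks(X, num_stages=4, num_codewords=16)
x = X[0]
x_hat = rq_decode(rq_encode(x, books), books)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
print("norm/angle gap:", norm_angle_gap(x, x_hat))
```

In DTRQ, per the abstract, the structure RQ exposes would additionally drive sample grouping for triplet mining and the norm/angle terms would enter the training loss; in this sketch the codebooks only compress the vectors.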

Research Area(s)

  • Approximate nearest neighbor search, Deep learning, Quantization, Triplet loss