Distinctive Image Captioning via CLIP Guided Group Optimization

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Author(s): Zhang, Youyuan; Wang, Jiuniu; Wu, Hao et al.

Detail(s)

Original language: English
Title of host publication: Computer Vision – ECCV 2022 Workshops
Subtitle of host publication: Proceedings, Part IV
Editors: Leonid Karlinsky, Tomer Michaeli, Ko Nishino
Publisher: Springer, Cham
Pages: 223-238
Edition: 1
ISBN (electronic): 978-3-031-25069-9
ISBN (print): 978-3-031-25068-2
Publication status: Published - Oct 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13804 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Title: 17th European Conference on Computer Vision, ECCV 2022
Place: Israel
City: Tel Aviv
Period: 23 - 27 October 2022

Abstract

Image captioning models are usually trained on human-annotated ground-truth captions, which often yields accurate but generic captions. In this paper, we focus on generating distinctive captions that can distinguish the target image from other similar images. To evaluate distinctiveness, we introduce a series of metrics that use the large-scale vision-language pre-training model CLIP to quantify how distinctive a caption is. To further improve the distinctiveness of captioning models, we propose a simple and effective training strategy that trains the model by comparing the target image with a group of similar images and optimizing the group embedding gap. Extensive experiments on various baseline models demonstrate the wide applicability of our strategy and the consistency of the metric results with human evaluation. Comparing our best model with existing state-of-the-art models, we show that it achieves a new state of the art on the distinctiveness objective. © 2023, The Author(s).
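The group embedding gap described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: it takes hypothetical pre-computed CLIP embeddings for a caption, its target image, and a group of similar images, and scores the caption by how much more similar it is to the target than to the group on average.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between one vector `a` and a batch of vectors `b`."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return b @ a

def group_embedding_gap(caption_emb, target_emb, similar_embs):
    """Toy distinctiveness score (illustrative, not the paper's exact loss):
    similarity of the caption to the target image minus its mean
    similarity to the group of similar images.  A generic caption that
    also matches the similar images scores low; a distinctive one scores high."""
    sim_target = float(cosine_sim(caption_emb, target_emb[None, :])[0])
    sim_group = float(cosine_sim(caption_emb, similar_embs).mean())
    return sim_target - sim_group

# Hypothetical 2-D "embeddings" for demonstration only.
target = np.array([1.0, 0.0])
similar = np.array([[0.8, 0.6], [0.6, 0.8]])
distinctive_caption = np.array([1.0, 0.0])   # aligned with the target only
generic_caption = np.array([0.7, 0.7])       # matches the whole group

print(group_embedding_gap(distinctive_caption, target, similar))
print(group_embedding_gap(generic_caption, target, similar))
```

In this toy setup the distinctive caption gets a positive gap while the generic one goes negative, which is the signal the training strategy pushes the captioner to maximize.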

Research Area(s)

  • CLIP, Distinctive image captioning, Group embedding gap, Similar image group

Citation Format(s)

Distinctive Image Captioning via CLIP Guided Group Optimization. / Zhang, Youyuan; Wang, Jiuniu; Wu, Hao et al.
Computer Vision – ECCV 2022 Workshops: Proceedings, Part IV. ed. / Leonid Karlinsky; Tomer Michaeli; Ko Nishino. 1. ed. Springer, Cham, 2022. p. 223-238 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13804 LNCS).