Abstract
Image captioning models are usually trained on human-annotated ground-truth captions, which tends to produce accurate but generic captions. In this paper, we focus on generating distinctive captions that can distinguish the target image from other similar images. To evaluate the distinctiveness of captions, we introduce a series of metrics that use the large-scale vision-language pre-trained model CLIP to quantify distinctiveness. To further improve the distinctiveness of captioning models, we propose a simple and effective training strategy that compares the target image with a group of similar images and optimizes the group embedding gap. Extensive experiments on various baseline models demonstrate the wide applicability of our strategy and the consistency of the metric results with human evaluation. Comparing our best model with existing state-of-the-art models, we show that it achieves a new state of the art on the distinctiveness objective. © 2023, The Author(s).
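The "group embedding gap" mentioned in the abstract can be illustrated with a minimal sketch. The function name, the assumption of precomputed L2-normalized CLIP embeddings, and the exact form of the score (caption-to-target similarity minus mean caption-to-group similarity) are illustrative assumptions here, not the paper's exact formulation:

```python
import numpy as np

def group_embedding_gap(caption_emb, target_emb, similar_embs):
    """Illustrative distinctiveness score for one caption.

    caption_emb  -- CLIP text embedding of the generated caption
    target_emb   -- CLIP image embedding of the target image
    similar_embs -- list of CLIP image embeddings of similar images

    All embeddings are assumed to be L2-normalized, so a dot product
    equals cosine similarity. A larger gap means the caption matches
    the target image better than the group of similar images, i.e. it
    is more distinctive.
    """
    target_sim = float(np.dot(caption_emb, target_emb))
    group_sim = float(np.mean([np.dot(caption_emb, e) for e in similar_embs]))
    return target_sim - group_sim
```

A distinctiveness-oriented training strategy of the kind described would push this gap to be large; a generic caption that fits every image in the group equally well would score near zero.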
Original language | English |
---|---|
Title of host publication | Computer Vision – ECCV 2022 Workshops |
Subtitle of host publication | Proceedings, Part IV |
Editors | Leonid Karlinsky, Tomer Michaeli, Ko Nishino |
Publisher | Springer, Cham |
Pages | 223-238 |
Edition | 1 |
ISBN (Electronic) | 978-3-031-25069-9 |
ISBN (Print) | 978-3-031-25068-2 |
Publication status | Published - Oct 2022 |
Event | 17th European Conference on Computer Vision (ECCV 2022) - Hybrid, Tel-Aviv, Israel Duration: 23 Oct 2022 → 27 Oct 2022 https://eccv2022.ecva.net/ |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13804 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 17th European Conference on Computer Vision (ECCV 2022) |
---|---|
Abbreviated title | ECCV’22 |
Country/Territory | Israel |
City | Tel-Aviv |
Period | 23/10/22 → 27/10/22 |
Research Keywords
- CLIP
- Distinctive image captioning
- Group embedding gap
- Similar image group