Gradient-based Visual Explanation for Transformer-based CLIP

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

2 Scopus Citations

Author(s)

Zhao, Chenyang; Wang, Kun; Zeng, Xingyu et al.

Detail(s)

Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning
Pages: 61072-61091
Publication status: Published - Jul 2024

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Title: 41st International Conference on Machine Learning (ICML 2024)
Location: Messe Wien Exhibition Congress Center
Place: Austria
City: Vienna
Period: 21 - 27 July 2024

Abstract

Significant progress has been achieved on the improvement and downstream use of the Contrastive Language-Image Pre-training (CLIP) vision-language model, while less attention has been paid to the interpretation of CLIP. We propose a Gradient-based visual Explanation method for CLIP (Grad-ECLIP), which interprets the matching result of CLIP for a specific input image-text pair. By decomposing the architecture of the encoder and discovering the relationship between the matching similarity and intermediate spatial features, Grad-ECLIP produces effective heat maps that show the influence of image regions or words on the CLIP results. Different from previous Transformer interpretation methods that focus on the utilization of self-attention maps, which are typically extremely sparse in CLIP, we produce high-quality visual explanations by applying channel and spatial weights to token features. Qualitative and quantitative evaluations verify the superiority of Grad-ECLIP compared with state-of-the-art methods. A series of analyses is conducted based on our visual explanation results, from which we explore the working mechanism of image-text matching, and the strengths and limitations of CLIP in attribution identification. Code is available at: https://github.com/Cyang-Zhao/Grad-Eclip.
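The abstract describes producing a heat map by applying channel and spatial weights to token features rather than raw self-attention maps. A minimal numpy sketch of that general idea follows; the tensor shapes, variable names, random stand-in values, and the normalized-attention spatial weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical setup: N image tokens on a 7x7 patch grid, C feature channels.
rng = np.random.default_rng(0)
N, C = 49, 512
V = rng.standard_normal((N, C))    # token features from the last encoder layer (stand-in)
grad = rng.standard_normal(C)      # gradient of image-text similarity w.r.t. pooled
                                   # feature channels -> channel importance weights
attn = rng.random(N)               # [CLS]-to-token attention scores (stand-in)
lam = attn / attn.max()            # normalized spatial weights in [0, 1]

# Heat map score per token: ReLU of the channel-weighted, spatially-weighted
# token feature, summed over channels.
heat = np.maximum(0.0, (lam[:, None] * V * grad[None, :]).sum(axis=1))
heatmap = heat.reshape(7, 7)       # reshape per-token scores back to the patch grid
```

In practice the channel weights would come from backpropagating the image-text similarity score through the encoder, and the heat map would be upsampled to the input image resolution for visualization.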

© 2024 by the author(s).

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

Gradient-based Visual Explanation for Transformer-based CLIP. / Zhao, Chenyang; Wang, Kun; Zeng, Xingyu et al.
Proceedings of the 41st International Conference on Machine Learning. 2024. p. 61072-61091 (Proceedings of Machine Learning Research; Vol. 235).
