Gradient-based Visual Explanation for Transformer-based CLIP
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zhao, Chenyang; Wang, Kun; Zeng, Xingyu et al.
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Proceedings of the 41st International Conference on Machine Learning |
Pages | 61072-61091 |
Publication status | Published - Jul 2024 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 235 |
ISSN (Print) | 2640-3498 |
Conference
Title | 41st International Conference on Machine Learning (ICML 2024) |
---|---|
Location | Messe Wien Exhibition Congress Center |
Country | Austria |
City | Vienna |
Period | 21 - 27 July 2024 |
Link(s)
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85203792298&origin=recordpage |
---|---|
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(9d43862a-5abb-442b-9deb-87a4a7faf9d7).html |
Abstract
Significant progress has been achieved on improving and applying the Contrastive Language-Image Pre-training (CLIP) vision-language model, while less attention has been paid to interpreting CLIP. We propose a Gradient-based visual Explanation method for CLIP (Grad-ECLIP), which interprets the matching result of CLIP for a specific input image-text pair. By decomposing the architecture of the encoder and discovering the relationship between the matching similarity and intermediate spatial features, Grad-ECLIP produces effective heat maps that show the influence of image regions or words on the CLIP results. Different from previous Transformer interpretation methods that focus on utilizing self-attention maps, which are typically extremely sparse in CLIP, we produce high-quality visual explanations by applying channel and spatial weights to token features. Qualitative and quantitative evaluations verify the superiority of Grad-ECLIP over state-of-the-art methods. A series of analyses is conducted based on our visual explanation results, from which we explore the working mechanism of image-text matching and the strengths and limitations of CLIP in attribution identification. Code is available at: https://github.com/Cyang-Zhao/Grad-Eclip.
© 2024 by the author(s).
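The abstract's core mechanism, weighting intermediate token features by the gradient of the image-text similarity, can be sketched in a few lines of PyTorch. The snippet below is a simplified, Grad-CAM-style illustration under stated assumptions, not the authors' exact formulation (per the abstract, Grad-ECLIP additionally applies spatial weights to token features); the function name, arguments, and hook-free interface are all hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_weighted_heatmap(token_feats, similarity, grid_hw):
    """Illustrative gradient-weighted heat map over patch tokens.

    token_feats: (N, C) patch-token features from an intermediate encoder
                 layer ([CLS] excluded); must be part of the autograd graph
                 that produced `similarity`.
    similarity:  scalar CLIP matching score, e.g. the cosine similarity of
                 the pooled image embedding and the text embedding.
    grid_hw:     (H, W) patch grid, e.g. (14, 14) for ViT-B/16 at 224 px.
    """
    # Channel weighting: gradient of the matching score with respect to
    # every channel of every spatial token.
    grads, = torch.autograd.grad(similarity, token_feats, retain_graph=True)
    # Gradient-feature product summed over channels; ReLU keeps only the
    # regions that contribute positively to the matching score.
    heat = F.relu((grads * token_feats).sum(dim=-1))      # shape (N,)
    heat = heat.reshape(grid_hw)
    # Min-max normalize to [0, 1] for overlaying on the input image.
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
```

In practice the intermediate `token_feats` would be captured with a forward hook on the chosen encoder layer, and the paper's released code applies the same idea to the text encoder to obtain per-word relevance; see the repository linked in the abstract for the authors' implementation.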
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Gradient-based Visual Explanation for Transformer-based CLIP. / Zhao, Chenyang; Wang, Kun; Zeng, Xingyu et al.
Proceedings of the 41st International Conference on Machine Learning. 2024. p. 61072-61091 (Proceedings of Machine Learning Research; Vol. 235).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review