Gradient-Based Instance-Specific Visual Explanations for Object Specification and Object Discrimination
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Zhao, Chenyang; Hsiao, Janet H.; Chan, Antoni B.
Detail(s)
Original language | English |
---|---|
Number of pages | 18 |
Journal / Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Publication status | Online published - 22 Mar 2024 |
Link(s)
DOI | DOI |
---|---|
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85188896489&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(cbe72029-fadc-42d1-a322-0b38a7488f98).html |
Abstract
We propose the gradient-weighted Object Detector Activation Maps (ODAM), a visual explanation technique for interpreting the predictions of object detectors. Utilizing the gradients of detector targets flowing into the intermediate feature maps, ODAM produces heat maps that show the influence of image regions on the detector's decision for each predicted attribute. In contrast to previous work on class activation maps (CAM), ODAM generates instance-specific explanations rather than class-specific ones. We show that ODAM is applicable to one-stage, two-stage, and transformer-based detectors with different types of detector backbones and heads, and produces higher-quality visual explanations than the state-of-the-art in terms of both effectiveness and efficiency. We discuss two explanation tasks for object detection: 1) object specification: which region is important for the prediction? 2) object discrimination: which object is detected? Aiming at these two aspects, we present a detailed analysis of the visual explanations of detectors and carry out extensive experiments to validate the effectiveness of the proposed ODAM. Furthermore, we investigate user trust in the explanation maps, how well the visual explanations of object detectors agree with human explanations, as measured through human eye gaze, and whether this agreement is related to user trust. Finally, we also propose two applications, ODAM-KD and ODAM-NMS, based on these two abilities of ODAM. ODAM-KD utilizes the object specification of ODAM to generate top-down attention for key predictions and guide knowledge distillation for object detection. ODAM-NMS considers the location of the model's explanation for each prediction to distinguish duplicate detections. A training scheme, ODAM-Train, is proposed to improve the quality of object discrimination and to help ODAM-NMS. The code of ODAM is available at https://github.com/Cyang-Zhao/ODAM.
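The gradient-weighted heat map described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code: `odam_style_heatmap` is a hypothetical helper, and the gradients are assumed to be supplied externally (e.g., by an autograd framework backpropagating one detection's predicted score). It only illustrates the general idea of weighting intermediate activations by per-instance gradients to obtain an instance-specific map.

```python
import numpy as np

def odam_style_heatmap(feature_map, grads):
    # feature_map: (C, H, W) intermediate activations of the detector.
    # grads: (C, H, W) gradients of ONE instance's predicted attribute
    # (e.g., its classification score) w.r.t. feature_map; using gradients
    # of a single prediction is what makes the map instance-specific
    # rather than class-specific.
    heat = np.maximum((grads * feature_map).sum(axis=0), 0.0)  # ReLU of channel sum
    return heat / (heat.max() + 1e-8)                          # normalize to [0, 1]

# Toy usage: take the "score" to be the sum of squared activations in a
# 4x4 window, so the analytic gradient is 2 * feature_map inside the
# window and 0 outside. The resulting map is hot only inside the window.
feat = np.abs(np.random.default_rng(0).normal(size=(4, 8, 8)))
grads = np.zeros_like(feat)
grads[:, 2:6, 2:6] = 2.0 * feat[:, 2:6, 2:6]
hm = odam_style_heatmap(feat, grads)
print(hm.shape)  # (8, 8)
```

In an actual detector the gradients would come from backpropagating one prediction's target (score, box coordinate, etc.) to a chosen feature layer, producing a separate heat map per detected instance.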
Research Area(s)
- Deep learning, Detectors, explainable AI, explaining object detection, gradient-based explanation, Heat maps, human eye gaze, instance-level explanation, knowledge distillation, non-maximum suppression, Object detection, object discrimination, object specification, Predictive models, Task analysis, Transformers, Visualization
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Gradient-Based Instance-Specific Visual Explanations for Object Specification and Object Discrimination. / Zhao, Chenyang; Hsiao, Janet H.; Chan, Antoni B.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 22.03.2024.