Trustworthy Medical Analysis and Applications with Explainable Visual Representation and Decision Model Learning
Description

A significant challenge in artificial-intelligence-based medical analysis is the lack of model explainability: traditional data-driven approaches typically rely on black-box models without examining their internal rationales. Towards trustworthy medical analysis and applications, this project proposes a new framework centered on learning explainable model decisions and visual data representations. The framework employs stacked gradient-based attention methods to generate faithful explanations, iteratively combining integrated gradients with the model's inherent attention weights along the pathways to its decisions. Building on the generated explanations, the framework further learns to adaptively optimize resource allocation for trustworthy remote diagnosis. Finally, upon successful development of the algorithms, our system is constructed to provide human-understandable rationales for medical data learning and automatic diagnosis. Our preliminary findings show the promising faithfulness of the proposed explanation approaches.
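The idea of combining integrated gradients with inherent attention weights can be sketched as follows. This is a minimal illustration under assumed simplifications (a toy linear, attention-weighted scorer and a simple multiplicative combination rule), not the project's actual architecture or combination procedure.

```python
import numpy as np

# Toy "model": an attention-weighted linear scorer over 4 features.
# The weights, attention values, and combination rule are illustrative
# assumptions, not the project's actual method.
rng = np.random.default_rng(0)
W = rng.normal(size=(4,))                # per-feature weights
attn = np.array([0.1, 0.4, 0.3, 0.2])    # inherent attention weights (sum to 1)

def model(x):
    # score = sum_i attn_i * W_i * x_i
    return float(np.dot(attn * W, x))

def integrated_gradients(x, baseline, steps=64):
    # Approximate IG: average the gradient along the straight-line path
    # from baseline to x, then scale by (x - baseline).
    # For this linear model the gradient is constant: attn * W.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([attn * W for _ in alphas])  # constant for a linear model
    avg_grad = grads.mean(axis=0)
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0, -1.0, 0.5])
baseline = np.zeros_like(x)
ig = integrated_gradients(x, baseline)

# One plausible way to fuse the two signals: reweight IG attributions
# by the attention weights to emphasize attended features.
combined = ig * attn

# IG completeness check: attributions sum to f(x) - f(baseline).
assert np.isclose(ig.sum(), model(x) - model(baseline))
```

For a linear model the integrated-gradients attributions satisfy the completeness axiom exactly, which the final assertion verifies; in a real deep model the gradients along the path would differ at each step, and the attention weights would come from the network's own attention layers.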
Effective start/end date: 1/05/22 → …