Evaluating Visual Explainability in Chest X-Ray Pathology Detection

Type
Article in International Conference Proceedings Book
Year
2024
Authors
Pereira, P (Author), Other (not affiliated with the institution)
Rocha, J (Author), Other (not affiliated with the institution)
Pedrosa, J (Author), Other (not affiliated with the institution)
Ana Maria Mendonça (Author), FEUP
Conference proceedings (International)
IEEE 22nd Mediterranean Electrotechnical Conference (MELECON)
Porto, Portugal, June 25-27, 2024
Pages: 1116-1121
Indexing
Indexed in ISI Web of Knowledge (0 citations)
Indexed in Scopus (0 citations)
Other information
Authenticus ID: P-016-W3R
Abstract (EN): Chest X-Ray (CXR) plays a vital role in diagnosing lung and heart conditions, but the high demand for CXR examinations poses challenges for radiologists. Automatic support systems can ease this burden by assisting radiologists in the image analysis process. While Deep Learning models have shown promise in this task, concerns persist regarding their complexity and decision-making opacity. To address this, various visual explanation techniques have been developed to elucidate model reasoning, some of which, such as GradCAM, have received significant attention in the literature and are widely used. However, it is unclear how different explanation methods perform, how to quantitatively measure their performance, and how that performance may depend on the model architecture and the dataset characteristics. In this work, two widely used deep classification networks, DenseNet121 and ResNet50, are trained for multi-pathology classification on CXR, and visual explanations are then generated using GradCAM, GradCAM++, EigenGrad-CAM, Saliency maps, LRP and DeepLift. These explanation methods are then compared with radiologist annotations using previously proposed explainability evaluation metrics: intersection over union and hit rate. Furthermore, a novel method to convey visual explanations in the form of radiological written reports is proposed, allowing for a clinically-oriented explainability evaluation metric, the zones score. It is shown that GradCAM++ and Saliency methods offer the most accurate explanations, and that the effectiveness of visual explanations varies with the model and corresponding input size. Additionally, explainability performance across different CXR datasets is evaluated, highlighting that explanation quality depends on the dataset's characteristics and annotations.
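As a rough illustration of the two previously proposed evaluation metrics named in the abstract, the sketch below compares an explanation heatmap against a radiologist's binary annotation mask. The relative-threshold binarization and the peak-based hit definition are assumptions made for illustration; they are not necessarily the paper's exact protocol.

import numpy as np

def iou(heatmap: np.ndarray, mask: np.ndarray, thresh: float = 0.5) -> float:
    """Intersection over union between a thresholded explanation heatmap
    and a binary radiologist annotation mask (thresholding scheme assumed)."""
    pred = heatmap >= thresh * heatmap.max()      # binarize relative to the peak
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / union if union > 0 else 0.0

def hit(heatmap: np.ndarray, mask: np.ndarray) -> float:
    """1.0 if the heatmap's most activated pixel falls inside the
    annotated region, else 0.0."""
    peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return float(mask[peak])

# Toy example: a 4x4 heatmap whose hot spot overlaps the annotation.
heat = np.array([[0.1, 0.2, 0.1, 0.0],
                 [0.2, 0.9, 0.8, 0.1],
                 [0.1, 0.7, 0.6, 0.1],
                 [0.0, 0.1, 0.1, 0.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(f"IoU: {iou(heat, mask):.2f}, hit: {hit(heat, mask):.0f}")

Averaged over all images and pathologies in a dataset, the per-image hit values yield the hit rate used to compare explanation methods.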
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 6
Documents
No documents are associated with this publication.