Publication

Towards a Joint Approach to Produce Decisions and Explanations Using CNNs

Title
Towards a Joint Approach to Produce Decisions and Explanations Using CNNs
Type
Article in International Conference Proceedings Book
Year
2020
Authors
Isabel Rio-Torto (Author) (external to the institution)
Kelwin Fernandes (Author) (external to the institution)
Conference proceedings (International)
9th Iberian Conference on Pattern Recognition and Image Analysis, IbPRIA 2019, 1-4 July 2019
Pages: 3-15
Other information
Authenticus ID: P-00R-FAP
Abstract (EN): Convolutional Neural Networks, as well as other deep learning methods, have shown remarkable performance on tasks like classification and detection. However, these models largely remain black boxes. With the widespread use of such networks in real-world scenarios and the growing demand for a right to explanation, especially in highly regulated areas like medicine and criminal justice, generating accurate predictions is no longer enough. Machine learning models have to be explainable, i.e., understandable to humans, which entails being able to present the reasons behind their decisions. While most of the literature focuses on post-model methods, we propose an in-model CNN architecture composed of an explainer and a classifier. The model is trained end-to-end, with the classifier taking as input not only images from the dataset but also the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. We also developed a synthetic dataset generation framework that allows for automatic annotation and the creation of easy-to-understand images that do not require expert knowledge to be explained. Promising results were obtained, especially when using L1 regularisation, validating the potential of the proposed architecture and further encouraging research to improve its explainability and performance. © 2019, Springer Nature Switzerland AG.
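The abstract outlines the training setup concretely enough to sketch: an explainer network produces an explanation map, the classifier receives the image together with that map, and both are optimised jointly with an L1 penalty encouraging sparse explanations. Below is a minimal, hypothetical PyTorch reconstruction of that idea; the layer sizes, the l1_weight value, and the Explainer/Classifier module definitions are our own illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Explainer(nn.Module):
    """Produces a one-channel explanation map with the input's spatial size."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Classifies the image concatenated channel-wise with its explanation."""
    def __init__(self, in_channels=4, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x, explanation):
        h = self.features(torch.cat([x, explanation], dim=1))
        return self.head(h.flatten(1))

explainer, classifier = Explainer(), Classifier()
optimizer = torch.optim.Adam(
    list(explainer.parameters()) + list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()
l1_weight = 1e-4  # assumed value; the abstract only says L1 regularisation helped

def train_step(images, labels):
    """One end-to-end step: a joint loss updates both networks at once."""
    explanation = explainer(images)
    logits = classifier(images, explanation)
    # Classification loss plus L1 sparsity on the explanation map.
    loss = ce(logits, labels) + l1_weight * explanation.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Calling train_step on batches of (images, labels) trains the pair jointly; the L1 term pushes the explanation map toward sparse, localised regions, which matches the abstract's observation that L1 regularisation gave the most promising results.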
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 13
Documents
No documents are associated with this publication.