Publication

Machine Learning Interpretability: A Survey on Methods and Metrics

Title
Machine Learning Interpretability: A Survey on Methods and Metrics
Type
Another Publication in an International Scientific Journal
Year
2019
Authors
Carvalho, DV
(Author)
Other
The person does not belong to the institution. Without AUTHENTICUS. Without ORCID.
Pereira, EM
(Author)
Other
The person does not belong to the institution. Without AUTHENTICUS. Without ORCID.
Jaime S Cardoso
(Author)
FEUP
Journal
Vol. 8
Final page: 832
ISSN: 2079-9292
Publisher: MDPI
Other information
Authenticus ID: P-00Q-VNJ
Abstract (EN): Machine learning systems are becoming increasingly ubiquitous. The adoption of these systems has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and has focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. What are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field of machine learning interpretability, focusing on its societal impact and on the methods and metrics developed so far. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 34
Documents
We could not find any documents associated with the publication.
Related Publications

Of the same journal

Open-source electronics platforms as enabling technologies for smart cities: Recent developments and perspectives (2018)
Another Publication in an International Scientific Journal
Costa D.G.; Duran-Faundez C.
Modulation Methods for Direct and Indirect Matrix Converters: A Review (2021)
Another Publication in an International Scientific Journal
Varajao, D; Rui Esteves Araújo
Electrochemical Sensor-Based Devices for Assessing Bioactive Compounds in Olive Oils: A Brief Review (2018)
Another Publication in an International Scientific Journal
Marx, IMG; Veloso, ACA; Dias, LG; Susana Casal; Pereira, JA; Peres, AM
User-Driven Fine-Tuning for Beat Tracking (2021)
Article in International Scientific Journal
António S. Pinto; Sebastian Böck; Jaime S. Cardoso; Matthew E. P. Davies
Transparent Control Flow Transfer between CPU and Accelerators for HPC (2021)
Article in International Scientific Journal
Daniel Granhão; João Canas Ferreira

See all (30)

Copyright 1996-2025 © Faculdade de Medicina Dentária da Universidade do Porto
Page created on: 2025-07-07 at 22:13:46