Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?

Title
Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?
Type
Article in International Conference Proceedings Book
Year
2020
Authors
Ana F. Sequeira (Author)
Wilson Silva (Author)
João Ribeiro Pinto (Author)
Tiago Gonçalves (Author)
Jaime S. Cardoso (Author, FEUP)
Conference Proceedings (International)
Pages: 1-6
8th International Workshop on Biometrics and Forensics, IWBF 2020
29-30 April 2020
Other information
Authenticus ID: P-00S-C9S
Abstract (EN): Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for more elusive methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, which leaves the inner processes that sustain their predictions baffling. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), on data from the ROSE-Youtu video collection. The results show that interpretability tools can capture more completely the intricate behavior of the implemented model, and enable the identification of properties that a robust, coherent, and meaningful PAD method should verify in order to generalize adequately to unseen data and attacks. We conclude that, with further efforts devoted to higher objectivity in interpretability, this can be the key to deeper and more thorough PAD performance evaluation setups. © 2020 IEEE.
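The traditional metrics named in the abstract (APCER, BPCER, EER) can be computed directly from detection scores. A minimal illustrative sketch, not the authors' code, using toy scores and the assumption that higher scores indicate attack presentations:

```python
def apcer(attack_scores, threshold):
    """Attack Presentation Classification Error Rate:
    fraction of attacks wrongly accepted as bona fide."""
    return sum(s < threshold for s in attack_scores) / len(attack_scores)

def bpcer(bonafide_scores, threshold):
    """Bona fide Presentation Classification Error Rate:
    fraction of bona fide samples wrongly rejected as attacks."""
    return sum(s >= threshold for s in bonafide_scores) / len(bonafide_scores)

def eer(attack_scores, bonafide_scores, steps=1000):
    """Equal Error Rate: operating point where APCER ~= BPCER,
    found here by a simple threshold grid search."""
    lo = min(attack_scores + bonafide_scores)
    hi = max(attack_scores + bonafide_scores)
    best_gap, best_t = 1.0, lo
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        gap = abs(apcer(attack_scores, t) - bpcer(bonafide_scores, t))
        if gap < best_gap:
            best_gap, best_t = gap, t
    return (apcer(attack_scores, best_t) + bpcer(bonafide_scores, best_t)) / 2

attacks = [0.9, 0.8, 0.7, 0.4]   # toy scores for attack samples
bonafide = [0.1, 0.2, 0.3, 0.6]  # toy scores for bona fide samples
print(round(eer(attacks, bonafide), 2))  # prints 0.25
```

The paper's argument is that these label-based error rates, while standard, summarize only the decisions, not the evidence the model used to reach them.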
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 6
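On the interpretability side, Grad-CAM builds a class activation map by weighting each convolutional feature map with the global average of its gradients and applying a ReLU. A toy sketch of just that weighting step, assuming the activations and gradients have already been extracted from a network; this is not the paper's implementation:

```python
def grad_cam(activations, gradients):
    """Combine K feature maps (H x W nested lists) into a Grad-CAM map:
    channel weights = global average pooling of the gradients,
    map = ReLU of the weighted sum of activations."""
    k_channels = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # one importance weight per channel
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # weighted combination of feature maps, clipped at zero (ReLU)
    return [[max(0.0, sum(weights[k] * activations[k][i][j]
                          for k in range(k_channels)))
             for j in range(w)] for i in range(h)]

# toy example: two 2x2 channels with opposite gradient signs
activations = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
gradients = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]
print(grad_cam(activations, gradients))  # prints [[1.0, 0.0], [0.0, 1.0]]
```

The resulting map highlights the input regions that most supported the predicted class, which is what lets the evaluation check whether a PAD model attends to meaningful evidence rather than dataset artifacts.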
Documents
No documents are associated with this publication.
Related Publications

By the same authors

An exploratory study of interpretability for face presentation attack detection (2021)
Article in International Scientific Journal
Ana F. Sequeira; Tiago Gonçalves; Wilson Silva; João Ribeiro Pinto; Jaime S. Cardoso