Publication

Title
A robust fingerprint presentation attack detection method against unseen attacks through adversarial learning
Type
Article in International Conference Proceedings Book
Year
2020
Authors
Pereira, JA (Author), Other — not affiliated with the institution; no Authenticus or ORCID record
Sequeira, AF (Author), Other — not affiliated with the institution; Authenticus page available; no ORCID record
Jaime S Cardoso (Author), FEUP — Authenticus and ORCID pages available
Conference proceedings (International)
Pages: 183-190
19th International Conference of the Biometrics Special Interest Group, BIOSIG 2020
16-18 September 2020
Other information
Authenticus ID: P-00S-ZMP
Abstract (EN): Fingerprint presentation attack detection (PAD) methods achieve stunning performance in the current literature. However, the fingerprint PAD generalisation problem remains an open challenge, requiring methods able to cope with sophisticated and unseen attacks as potential intruders become more capable. This work addresses the problem by applying a regularisation technique based on adversarial training and representation learning, specifically designed to improve the model's PAD generalisation capacity to an unseen attack. In the adopted approach, the model jointly learns the representation and the classifier from the data, while explicitly imposing invariance in the high-level representations with respect to the type of attack, yielding a robust PAD. The adversarial training methodology is evaluated in two scenarios: i) a handcrafted feature extraction method combined with a Multilayer Perceptron (MLP); and ii) an end-to-end solution using a Convolutional Neural Network (CNN). The experimental results demonstrated that the adopted regularisation strategies equipped the neural networks with increased PAD robustness. The adversarial approach particularly improved the CNN models' capacity for attack detection in the unseen-attack scenario, showing remarkably improved APCER error rates compared to state-of-the-art methods under similar conditions.
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 5
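
The abstract above describes a PAD classifier and an attack-type adversary trained jointly on a shared representation, with the representation pushed to be invariant to the attack type. The paper's exact architecture is not reproduced on this page, so the following is only a minimal sketch of one standard way to impose such invariance: a gradient-reversal (DANN-style) adversarial branch in PyTorch. All names, layer sizes, the 512-dimensional input, and the use of gradient reversal itself are illustrative assumptions, not the authors' implementation; the APCER helper follows the ISO/IEC 30107-3 definition (proportion of attack presentations misclassified as bona fide).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales the gradient on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class AdversarialPAD(nn.Module):
    """Hypothetical model: shared encoder, PAD head, and attack-type adversary."""
    def __init__(self, in_dim=512, feat_dim=128, n_attack_types=4):
        super().__init__()
        # Shared encoder (stands in for the handcrafted-feature MLP or a CNN backbone).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        self.pad_head = nn.Linear(feat_dim, 2)            # bona fide vs. attack
        self.attack_head = nn.Linear(feat_dim, n_attack_types)  # adversary: attack type

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        # Gradient reversal: the adversary minimises its loss, but the encoder
        # receives the reversed gradient and so learns attack-type-invariant features.
        return self.pad_head(z), self.attack_head(grad_reverse(z, lambd))

def apcer(pad_pred, y_pad):
    """APCER: fraction of attack presentations (y_pad == 1) classified as bona fide.
    ISO/IEC 30107-3 defines it per PAI species; this pools all attacks for brevity."""
    attacks = y_pad == 1
    return ((pad_pred[attacks] == 0).float().mean().item()) if attacks.any() else 0.0

model = AdversarialPAD()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(x, y_pad, y_attack, lambd=1.0):
    pad_logits, attack_logits = model(x, lambd)
    # For simplicity every sample carries an attack-type label here; in practice
    # the adversarial term might be restricted to attack samples only.
    loss = ce(pad_logits, y_pad) + ce(attack_logits, y_attack)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data (batch of 8 hypothetical 512-d feature vectors).
x = torch.randn(8, 512)
y_pad, y_att = torch.randint(0, 2, (8,)), torch.randint(0, 4, (8,))
print(train_step(x, y_pad, y_att))
```

In this kind of scheme the scaling factor lambd trades off classification accuracy against invariance; setting it to zero recovers a plain jointly trained classifier, which is a useful baseline when checking that the adversarial term is actually helping.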
Documents
No documents are associated with this publication.