Publication

Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

Title
Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts
Type
Article in International Conference Proceedings Book
Year
2019-08-30
Authors
Nuno Martins (Author), Other
José Magalhães Cruz (Author), FEUP
Pedro Henriques Abreu (Author), Other
Tiago Cruz (Author), Other
Conference proceedings (International)
19th EPIA Conference on Artificial Intelligence, EPIA 2019
3-6 September 2019
Pages: 256-267
Other information
Abstract (EN): Adversarial machine learning studies both the generation and the detection of adversarial examples: inputs specially crafted to deceive classifiers. It has been researched extensively in image recognition, where humanly imperceptible modifications to images cause a classifier to make incorrect predictions. The main objective of this paper is to study the behavior of multiple state-of-the-art machine learning algorithms in an adversarial context. Six classification algorithms were evaluated on two datasets, NSL-KDD and CICIDS2017, against four adversarial attack techniques applied at multiple perturbation magnitudes. The effectiveness of training the models with adversarial examples to improve recognition was also tested. The results show that adversarial attacks degrade the performance of all the classifiers by between 13% and 40%, with the Denoising Autoencoder being the technique with the highest resilience to attacks.
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 12
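
The abstract describes generating adversarial examples at multiple perturbation magnitudes and then retraining the classifiers on them. Below is a minimal sketch of that general recipe, assuming an FGSM-style gradient-sign attack against a simple logistic-regression model on synthetic data. FGSM is one common attack of this kind, but this record does not say whether it is among the paper's four techniques; the paper's six classifiers and its datasets (NSL-KDD, CICIDS2017) are not reproduced, and every name in the sketch is illustrative.

# Sketch: FGSM-style attack and adversarial training on a toy classifier.
# Assumptions: logistic regression, synthetic data; not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for traffic features.
n, d = 2000, 10
X = rng.normal(0.0, 1.0, (n, d))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.5):
    # Plain gradient descent on the cross-entropy loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y                       # d(loss)/d(logit)
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return ((sigmoid(X @ w + b) > 0.5) == y).mean()

def fgsm(w, b, X, y, eps):
    # For logistic regression the input gradient of the loss is (p - y) * w;
    # FGSM perturbs each feature by eps in the sign of that gradient.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

w, b = train(X, y)
print("clean accuracy:      ", accuracy(w, b, X, y))

for eps in (0.1, 0.3, 0.5):             # several perturbation magnitudes
    print(f"accuracy at eps={eps}: ", accuracy(w, b, fgsm(w, b, X, y, eps), y))

# Adversarial training: refit on the union of clean and perturbed samples.
X_adv = fgsm(w, b, X, y, 0.3)
w2, b2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
print("adv-trained, eps=0.3:", accuracy(w2, b2, fgsm(w2, b2, X, y, 0.3), y))

The same loop structure carries over to real intrusion-detection features: craft perturbed copies of the training set using the current model's input gradients, then retrain on clean plus perturbed samples and re-measure accuracy under attack.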
Documents
No documents with allowed access are associated with this publication.
Related Publications

By the same authors

Adversarial Machine Learning Applied to Intrusion and Malware Scenarios: A Systematic Review (2020)
Article in International Scientific Journal
Nuno Martins; José Magalhães Cruz; Tiago Cruz; Pedro Henriques Abreu