Abstract (EN):
Adversarial machine learning is an area of study that examines both the generation and the detection of adversarial examples, which are inputs specially crafted to deceive classifiers. It has been researched extensively in image recognition, where humanly imperceptible modifications to an image cause a classifier to make incorrect predictions.
The main objective of this paper is to study the behavior of multiple state-of-the-art machine learning algorithms in an adversarial context.
To perform this study, six different classification algorithms were used on
two datasets, NSL-KDD and CICIDS2017, and four adversarial attack
techniques were implemented with multiple perturbation magnitudes.
Furthermore, the effectiveness of training the models with adversarial examples to improve recognition is also tested. The results show that the adversarial attacks degrade the performance of all the classifiers by between 13% and 40%, with the Denoising Autoencoder being the most resilient technique against the attacks.
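The abstract does not name the four attack techniques or the perturbation magnitudes used. As a purely illustrative sketch of how an epsilon-bounded adversarial perturbation deceives a classifier, the following Python example applies an FGSM-style attack to a toy logistic-regression model; the data, model, and epsilon value are hypothetical and not taken from the paper.

# Illustrative sketch only: an FGSM-style perturbation against a simple
# logistic-regression classifier. The paper does not specify which four
# attack techniques it uses; this is a generic example of perturbing an
# input by epsilon in the direction that increases the classifier's loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors and binary labels (hypothetical data).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression model with plain gradient descent.
w = np.zeros(10)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)      # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def fgsm(x, label, eps):
    """Shift x by eps along the sign of the loss gradient w.r.t. the input."""
    p = 1 / (1 + np.exp(-(x @ w + b)))
    grad_x = (p - label) * w             # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

eps = 0.3                                # perturbation magnitude (epsilon)
X_adv = np.array([fgsm(x, t, eps) for x, t in zip(X, y)])

acc = lambda data: np.mean(((data @ w + b) > 0).astype(float) == y)
print(f"clean accuracy:       {acc(X):.2f}")
print(f"adversarial accuracy: {acc(X_adv):.2f}")

Adversarial training, as tested in the paper, would in this setting amount to retraining the model on a mixture of the clean inputs and their perturbed counterparts.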
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
12