Publication

Parallel Asynchronous Strategies for the Execution of Feature Selection Algorithms

Title
Parallel Asynchronous Strategies for the Execution of Feature Selection Algorithms
Type
Article in International Scientific Journal
Year
2018
Authors
Jorge Silva
(Author)
Other
The author does not belong to the institution; no ORCID registered.
Journal
Vol. 46 No. 2
Pages: 252-283
ISSN: 0885-7458
Publisher: Springer Nature
Other information
Authenticus ID: P-00N-9RM
Abstract (EN): Reducing the dimensionality of datasets is a fundamental step in the task of building a classification model. Feature selection is the process of selecting a smaller subset of features from the original set in order to enhance the performance of the classification model. The problem is known to be NP-hard, and despite the existence of several algorithms there is not one that outperforms the others in all scenarios. Due to the complexity of the problem, feature selection algorithms usually have to compromise the quality of their solutions in order to execute in a practicable amount of time. Parallel computing techniques emerge as a potential solution to tackle this problem. Several approaches already execute feature selection in parallel by resorting to synchronous models. These are preferred due to their simplicity and their capability to be used with any feature selection algorithm. However, synchronous models introduce pausing points during the execution flow, which decrease parallel performance. In this paper, we discuss the challenges of executing feature selection algorithms in parallel using asynchronous models, and present a feature selection algorithm that favours these models. Furthermore, we present two strategies for the asynchronous parallel execution not only of our algorithm but of any other feature selection approach. The first strategy solves the problem using the distributed memory paradigm, while the second exploits the use of shared memory. We evaluate the parallel performance of our strategies using up to 32 cores. The results show near-linear speedups for both strategies, with the shared memory strategy outperforming the distributed one. Additionally, we provide an example of adapting our strategies to execute Sequential Forward Search asynchronously, and compare this version against a synchronous one. The results reveal that, by using an asynchronous strategy, we are able to save an average of 7.5% of the execution time.
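The abstract mentions adapting Sequential Forward Search (SFS) to run asynchronously. As a loose, hypothetical illustration only (not the authors' implementation), the Python sketch below scores the candidate feature subsets of each SFS step on a worker pool and consumes results as they finish; unlike the strategies proposed in the paper, it still synchronizes at the end of every step. The dataset, the wrapper classifier, and the names `evaluate_subset` and `sfs_async` are assumptions introduced for this sketch.

```python
# Hypothetical sketch of greedy Sequential Forward Search with
# out-of-order (asynchronous) evaluation of candidate subsets.
from concurrent.futures import ProcessPoolExecutor, as_completed

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def evaluate_subset(X, y, subset):
    """Score a feature subset with a simple wrapper classifier (assumed)."""
    clf = KNeighborsClassifier(n_neighbors=3)
    score = cross_val_score(clf, X[:, subset], y, cv=3).mean()
    return subset, score


def sfs_async(X, y, k, workers=4):
    """Greedy SFS: candidate subsets of each step are scored in parallel.

    Results are consumed as they arrive (as_completed), so fast
    evaluations are not blocked by slow ones within a step; the loop
    still waits for all candidates before selecting the next feature.
    """
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_subset, best_score = None, float("-inf")
        with ProcessPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(evaluate_subset, X, y, selected + [f])
                       for f in remaining]
            for fut in as_completed(futures):
                subset, score = fut.result()
                if score > best_score:
                    best_subset, best_score = subset, score
        new_feature = best_subset[-1]
        selected.append(new_feature)
        remaining.remove(new_feature)
    return selected


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    print(sfs_async(X, y, k=5))
```

The per-step barrier in this sketch is exactly the kind of pausing point the paper argues against; removing it (e.g., by letting workers pull new candidates as soon as they become idle) is what the proposed asynchronous strategies address.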
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 32
Documents
We could not find any documents associated with this publication.
Related Publications

Of the same journal

Special Issue on High-Level Parallel Programming and Applications (2022)
Another Publication in an International Scientific Journal
Jorge Manuel Gomes Barbosa; Ines Dutra; Miguel Areias
Relational Learning with GPUs: Accelerating Rule Coverage (2016)
Article in International Scientific Journal
Alberto Martinez Angeles, CA; Wu, HC; Ines Dutra; Costa, VS; Buenabad Chavez, J
LALP: A Language to Program Custom FPGA-Based Acceleration Engines (2012)
Article in International Scientific Journal
Menotti, R; João M. P. Cardoso; Fernandes, MM; Marques, E
A Lock-Free Hash Trie Design for Concurrent Tabled Logic Programs (2016)
Article in International Scientific Journal
Miguel Areias; Ricardo Rocha