Selecting classification algorithms with active testing on similar datasets

Title
Selecting classification algorithms with active testing on similar datasets
Type
Article in International Conference Proceedings Book
Year
2012
Authors
Rui Leite (Author) — FEP
Pavel Brazdil (Author) — FEP
Vanschoren, J (Author) — Other (external to the institution)
International conference proceedings
Pages: 20-27
Workshop on Ubiquitous Data Mining, UDM 2012 - In Conjunction with the 20th European Conference on Artificial Intelligence, ECAI 2012
27-31 August 2012
Other information
Authenticus ID: P-00K-SPX
Abstract (EN): Given the large number of data mining algorithms, their combinations (e.g. ensembles) and possible parameter settings, finding the most adequate method to analyze a new dataset becomes an ever more challenging task. This is because in many cases testing all potentially useful alternatives quickly becomes prohibitively expensive. In this paper we propose a novel technique, called active testing, that intelligently selects the most useful cross-validation tests. It proceeds in a tournament-style fashion, in each round selecting and testing the algorithm that is most likely to outperform the best algorithm of the previous round on the new dataset. This 'most promising' competitor is chosen based on a history of prior duels between both algorithms on similar datasets. Each new cross-validation test contributes information that improves the estimate of dataset similarity, and thus better predicts which algorithms are most promising on the new dataset. We also follow a different path, estimating dataset similarity from data characteristics. We have evaluated this approach using a set of 292 algorithm-parameter combinations on 76 UCI datasets for classification. The results show that active testing quickly yields an algorithm whose performance is very close to the optimum, after relatively few tests. It also provides a better solution than previously proposed methods. The variants of our method that rely on cross-validation tests to estimate dataset similarity provide better solutions than those that rely on data characteristics.
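The tournament-style selection described in the abstract can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: here the "most promising" challenger is chosen by a static table of prior win rates, whereas the actual method re-weights prior duels by dataset similarity after every cross-validation test. All names (`active_testing`, `history`, `cv_test`) are hypothetical.

```python
def active_testing(algorithms, history, cv_test, rounds=10):
    """Tournament-style active testing (simplified sketch).

    algorithms: list of candidate algorithm identifiers
    history: dict mapping (challenger, incumbent) -> fraction of prior
             datasets on which the challenger beat the incumbent
    cv_test: callable returning cross-validated accuracy of an
             algorithm on the new dataset (the expensive operation)
    """
    best = algorithms[0]
    best_score = cv_test(best)
    tested = {best}
    for _ in range(rounds):
        candidates = [a for a in algorithms if a not in tested]
        if not candidates:
            break
        # Pick the competitor most likely to beat the current best,
        # judged by its win rate against it in prior duels.
        challenger = max(candidates, key=lambda a: history.get((a, best), 0.0))
        score = cv_test(challenger)
        tested.add(challenger)
        if score > best_score:
            best, best_score = challenger, score
    return best, best_score
```

Because each round runs only one cross-validation test, the number of expensive tests grows with the number of rounds rather than with the (here 292) candidate configurations, which is the source of the method's efficiency.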
Language: English
Type (Professor's evaluation): Scientific
Documents
No documents are associated with this publication.
Copyright 1996-2025 © Faculdade de Medicina Dentária da Universidade do Porto