Cherry-Picking in Time Series Forecasting: How to Select Datasets to Make Your Model Shine

Type
Article in International Conference Proceedings Book
Year
2025
Authors
Roque, L (Author), external
Soares, C (Author), external
Torgo, L (Author), external
Vitor Cerqueira (Author), FEUP
Carlos Soares (Author), FEUP
Conference proceedings (international)
39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
Philadelphia, 25 February to 4 March 2025
Indexing
Indexed in Scopus (0 citations)
Other information
Authenticus ID: P-017-YF6
Abstract (EN): The importance of time series forecasting drives continuous research and the development of new approaches to tackle this problem. Typically, these methods are introduced through empirical studies that frequently claim superior accuracy for the proposed approaches. Nevertheless, concerns are rising about the reliability and generalizability of these results due to limitations in experimental setups. This paper addresses a critical limitation: the number and representativeness of the datasets used. We investigate the impact of dataset selection bias, particularly the practice of cherry-picking datasets, on the performance evaluation of forecasting methods. Through empirical analysis with a diverse set of benchmark datasets, our findings reveal that cherry-picking datasets can significantly distort the perceived performance of methods, often exaggerating their effectiveness. Furthermore, our results demonstrate that by selectively choosing just four datasets (the number most studies report), 46% of methods could be deemed best in class, and 77% could rank within the top three. Additionally, recent deep learning-based approaches show high sensitivity to dataset selection, whereas classical methods exhibit greater robustness. Finally, our results indicate that, when empirically validating forecasting algorithms on a subset of the benchmarks, increasing the number of datasets tested from 3 to 6 reduces the risk of incorrectly identifying an algorithm as the best one by approximately 40%. Our study highlights the critical need for comprehensive evaluation frameworks that more accurately reflect real-world scenarios. Adopting such frameworks will ensure the development of robust and reliable forecasting methods. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
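
The cherry-picking analysis the abstract describes can be illustrated with a small simulation. The Python sketch below is a minimal illustration under stated assumptions, not the paper's actual code or data: the error matrix is synthetic, methods are scored by mean rank over the selected datasets, and names such as can_be_best are hypothetical.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical error matrix: rows are forecasting methods, columns are
# benchmark datasets; lower is better. A real study would fill this with
# measured errors (e.g., SMAPE per method per dataset).
n_methods, n_datasets, k = 10, 12, 4
errors = rng.random((n_methods, n_datasets))

# Per-dataset ranks: 1 means the method has the lowest error on that dataset.
ranks = errors.argsort(axis=0).argsort(axis=0) + 1

def can_be_best(method, k):
    # Return True if some k-dataset subset gives `method` the lowest mean rank.
    for subset in itertools.combinations(range(n_datasets), k):
        mean_ranks = ranks[:, list(subset)].mean(axis=1)
        if mean_ranks[method] == mean_ranks.min():
            return True
    return False

share = np.mean([can_be_best(m, k) for m in range(n_methods)])
print(f"{share:.0%} of methods can be presented as best with {k} datasets")

Replacing the synthetic matrix with measured benchmark errors is how one would estimate figures like the paper's reported 46% of methods deemed best in class with four datasets.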
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 7
Documents
No documents are associated with this publication.