Abstract (EN):
Deep learning models are widely used in multivariate time series forecasting, but they incur high computational costs. One way to reduce this cost is to reduce data dimensionality by removing unimportant or low-importance information with a suitable method. This work presents a study of an explainability-based feature selection framework composed of four methods (IMV-LSTM Tensor, LIME-LSTM, Average SHAP-LSTM, and Instance SHAP-LSTM) that turns the complexity of the LSTM black-box model to its advantage, with the goal of improving error metrics and reducing the computational cost of a forecasting task. The framework was tested on three datasets comprising a total of 101 multivariate time series, and the explainability methods outperformed the baseline methods on most of the data, both in error metrics and in LSTM training time.
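The abstract describes selecting features by their explainability scores and retraining a smaller LSTM on the reduced inputs. The sketch below illustrates one such pipeline, an "average SHAP" style ranking, under stated assumptions: the LSTM architecture, the aggregation of SHAP values over samples and time steps, the use of shap.GradientExplainer, and all data are hypothetical and not taken from the paper's framework.

```python
# Illustrative sketch only: average-SHAP feature selection for an LSTM
# forecaster. The paper's actual methods (IMV-LSTM Tensor, LIME-LSTM,
# Average/Instance SHAP-LSTM) and hyperparameters are not reproduced here.
import numpy as np
import shap
from tensorflow import keras

def build_lstm(n_steps, n_features):
    # Small single-layer LSTM regressor; the architecture is an assumption.
    model = keras.Sequential([
        keras.layers.Input(shape=(n_steps, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def average_shap_feature_ranking(model, X_background, X_explain):
    # Mean absolute SHAP value per input feature, averaged over samples
    # and time steps (one possible "average SHAP" aggregation).
    explainer = shap.GradientExplainer(model, X_background)
    sv = explainer.shap_values(X_explain)
    sv = np.asarray(sv[0] if isinstance(sv, list) else sv)
    if sv.ndim == 4:                      # some shap versions add an output axis
        sv = sv[..., 0]
    return np.abs(sv).mean(axis=(0, 1))   # importance per feature

def select_top_k(X, importances, k):
    # Keep only the k most important input variables.
    keep = np.argsort(importances)[::-1][:k]
    return X[:, :, keep], keep

# Hypothetical usage with synthetic data (not one of the paper's datasets).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24, 10)).astype("float32")   # 10 candidate features
y = (X[:, -1, 0] + 0.1 * rng.normal(size=500)).reshape(-1, 1)

full_model = build_lstm(24, 10)
full_model.fit(X, y, epochs=2, verbose=0)

imp = average_shap_feature_ranking(full_model, X[:50], X[:200])
X_red, kept = select_top_k(X, imp, k=4)

reduced_model = build_lstm(24, len(kept))   # cheaper model on fewer features
reduced_model.fit(X_red, y, epochs=2, verbose=0)
```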
Language:
English
Type (Faculty Evaluation):
Scientific
Number of pages:
14