
Forecast evaluation for data scientists: common pitfalls and best practices


Bibliographic Details
Main Authors: Hewamalage, Hansika; Ackermann, Klaus; Bergmeir, Christoph
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9718476/
https://www.ncbi.nlm.nih.gov/pubmed/36504672
http://dx.doi.org/10.1007/s10618-022-00894-5
Collection: PubMed
Description: Recent trends in the Machine Learning (ML) and in particular Deep Learning (DL) domains have demonstrated that, with the availability of massive amounts of time series, ML and DL techniques are competitive in time series forecasting. Nevertheless, the different forms of non-stationarity associated with time series challenge the capabilities of data-driven ML models. Furthermore, because the domain of forecasting has been fostered mainly by statisticians and econometricians over the years, the concepts related to forecast evaluation are not mainstream knowledge among ML researchers. We demonstrate in our work that, as a consequence, ML researchers often adopt flawed evaluation practices, which result in spurious conclusions that make methods that are not competitive in reality appear competitive. Therefore, in this work we provide a tutorial-like compilation of the details associated with forecast evaluation. In this way, we intend to present forecast evaluation in a form that fits the context of ML, as a means of bridging the knowledge gap between traditional methods of forecasting and current state-of-the-art ML techniques. We elaborate on the different problematic characteristics of time series, such as non-normality and non-stationarity, and how they are associated with common pitfalls in forecast evaluation. Best practices in forecast evaluation are outlined with respect to the different steps, such as data partitioning, error calculation, and statistical testing. Further guidelines are also provided on selecting valid and suitable error measures depending on the specific characteristics of the dataset at hand.
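The abstract names data partitioning and error-measure selection as core evaluation steps. As a generic illustration of those two ideas (not code from the article itself; function and variable names here are illustrative assumptions), a rolling-origin split combined with a scale-free error such as MASE can be sketched as:

```python
import numpy as np

def rolling_origin_splits(n, initial_train, horizon, step=1):
    """Yield (train_idx, test_idx) index arrays for rolling-origin evaluation.

    Unlike random K-fold cross-validation, every test window lies strictly
    after its training window, respecting the temporal order of the series.
    """
    origin = initial_train
    while origin + horizon <= n:
        yield np.arange(origin), np.arange(origin, origin + horizon)
        origin += step

def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error: the forecast MAE scaled by the in-sample
    MAE of the naive seasonal forecast (lag m), making it scale-free."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / scale

# Toy example: evaluate a naive last-value forecast on a linear series.
y = np.arange(20, dtype=float)  # hypothetical series
errors = []
for tr, te in rolling_origin_splits(len(y), initial_train=10, horizon=3, step=3):
    forecast = np.repeat(y[tr][-1], len(te))  # naive: repeat last observed value
    errors.append(mase(y[te], forecast, y[tr]))
print(np.round(np.mean(errors), 3))  # → 2.0
```

Because MASE divides by the in-sample naive error, results are comparable across series with different scales — one of the properties the abstract's guidelines on error-measure selection concern.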
Format: Online Article Text
ID: pubmed-9718476
Institution: National Center for Biotechnology Information
Language: English
Publish Date: 2022
Publisher: Springer US
Record Format: MEDLINE/PubMed
Journal: Data Min Knowl Discov
Published online: Springer US, 2022-12-02 (2023). © The Author(s) 2022. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Topic: Article