
Taming the Chaos in Neural Network Time Series Predictions

Machine learning methods, such as Long Short-Term Memory (LSTM) neural networks, can predict real-life time series data. Here, we present a new approach to predicting time series data that combines interpolation techniques, randomly parameterized LSTM neural networks, and measures of signal complexity, which we refer to as complexity measures throughout this research. First, we interpolate the time series data under study. Next, we predict the time series data using an ensemble of randomly parameterized LSTM neural networks. Finally, we filter the ensemble prediction based on the complexity of the original data to improve predictability, i.e., we keep only predictions with a complexity close to that of the training data. We test the proposed approach on five different univariate time series datasets, using linear and fractal interpolation to increase the amount of data. We test five different complexity measures as ensemble filters: the Hurst exponent, Shannon’s entropy, Fisher’s information, SVD entropy, and the spectrum of Lyapunov exponents. Our results show that the interpolated predictions consistently outperform the non-interpolated ones. The best ensemble predictions always beat a baseline prediction based on a neural network with only a single hidden LSTM, gated recurrent unit (GRU), or simple recurrent neural network (RNN) layer, and the complexity filters can reduce the error of a random ensemble prediction by a factor of 10. Further, because the networks are randomly parameterized, no hyperparameter tuning is required, which makes the method useful for real-time time series prediction: the usually costly and time-intensive optimization of hyperparameters can be circumvented.
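Although this record is bibliographic metadata, the abstract outlines a concrete three-step pipeline: interpolate the series, forecast with an ensemble of randomly parameterized LSTMs, and keep only forecasts whose complexity stays close to that of the training data. The following is a minimal sketch of that pipeline, assuming a Keras-style LSTM and SVD entropy as the complexity measure; the toy series, parameter ranges, ensemble size, forecast horizon, and the 0.05 tolerance are illustrative assumptions, not the authors' published settings.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

def svd_entropy(x, order=3, delay=1):
    """Shannon entropy of the normalized singular values of a
    delay-embedding matrix (one of the paper's complexity measures)."""
    n = len(x) - (order - 1) * delay
    emb = np.array([x[i:i + n] for i in range(0, order * delay, delay)]).T
    s = np.linalg.svd(emb, compute_uv=False)
    p = s / s.sum()
    return float(-np.sum(p * np.log2(p)))

def upsample_linear(x, factor=2):
    """Linear interpolation that inserts points between samples."""
    old = np.arange(len(x))
    new = np.linspace(0, len(x) - 1, factor * (len(x) - 1) + 1)
    return np.interp(new, old, x)

def make_windows(x, lookback):
    """Sliding windows for one-step-ahead supervised training."""
    X = np.array([x[i:i + lookback] for i in range(len(x) - lookback)])
    return X[..., None], x[lookback:]

def random_lstm(lookback):
    """Single-layer LSTM with randomly drawn width and learning rate
    (the ranges here are hypothetical, not the paper's)."""
    units = int(rng.integers(4, 64))
    lr = 10.0 ** rng.uniform(-3.5, -2.0)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(lookback, 1)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

# Toy random-walk stand-in for a real univariate series.
series = np.cumsum(rng.normal(size=120))
train = upsample_linear(series, factor=2)          # step 1: interpolate
lookback, horizon, tolerance = 12, 20, 0.05
X, y = make_windows(train, lookback)
target = svd_entropy(train)

kept = []
for _ in range(10):                                # step 2: random ensemble
    model = random_lstm(lookback)
    model.fit(X, y, epochs=5, verbose=0)
    window = list(train[-lookback:])
    forecast = []
    for _ in range(horizon):                       # roll the model forward
        nxt = model.predict(np.array(window)[None, :, None], verbose=0)[0, 0]
        forecast.append(float(nxt))
        window = window[1:] + [float(nxt)]
    # Step 3: complexity filter. Keep the forecast only if the extended
    # series has an SVD entropy close to that of the training data.
    extended = np.concatenate([train, forecast])
    if abs(svd_entropy(extended) - target) < tolerance:
        kept.append(forecast)

prediction = np.mean(kept, axis=0) if kept else None
```

In the paper, fractal interpolation and the four other complexity measures named in the abstract (Hurst exponent, Shannon’s entropy, Fisher’s information, Lyapunov spectrum) play the same filtering role as SVD entropy does in this sketch, and the filtered ensemble is what gets compared against the single-layer LSTM/GRU/RNN baselines.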


Bibliographic Details
Main Authors: Raubitzek, Sebastian; Neubauer, Thomas
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8622738/
https://www.ncbi.nlm.nih.gov/pubmed/34828122
http://dx.doi.org/10.3390/e23111424
Journal: Entropy (Basel)
Collection: PubMed (PMC8622738)
Published Online: 2021-10-28
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).