
History Marginalization Improves Forecasting in Variational Recurrent Neural Networks

Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.

Bibliographic Details
Main Authors: Qiu, Chen, Mandt, Stephan, Rudolph, Maja
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8700018/
https://www.ncbi.nlm.nih.gov/pubmed/34945869
http://dx.doi.org/10.3390/e23121563
author Qiu, Chen
Mandt, Stephan
Rudolph, Maja
collection PubMed
description Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
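The abstract's central construction can be sketched in code: at each time step, several samples of the previous latent variable are each propagated through a recurrent update, and each resulting hidden state parameterizes one component of a Gaussian mixture posterior. The following NumPy sketch is illustrative only, with untrained random weights, a tanh recurrent cell, and uniform mixture weights; the paper's actual architecture and component-weighting scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent size, hidden size, mixture components.
Z, H, K = 2, 8, 4

# Randomly initialized weights stand in for trained parameters.
W_h = rng.normal(scale=0.1, size=(H, H))    # hidden-to-hidden
W_z = rng.normal(scale=0.1, size=(H, Z))    # latent-to-hidden
W_mu = rng.normal(scale=0.1, size=(Z, H))   # hidden-to-component-mean
W_sig = rng.normal(scale=0.1, size=(Z, H))  # hidden-to-log-std

def rnn_step(h, z):
    """Deterministic recurrent update given the previous hidden state and a latent sample."""
    return np.tanh(W_h @ h + W_z @ z)

def posterior_step(h, prev_mu, prev_sigma):
    """Draw K samples of z_{t-1}; each induces one Gaussian component of q(z_t)."""
    mus, sigmas = [], []
    for _ in range(K):
        z_sample = prev_mu + prev_sigma * rng.normal(size=Z)  # sample z_{t-1}^{(k)}
        h_k = rnn_step(h, z_sample)                           # propagate through the RNN
        mus.append(W_mu @ h_k)
        sigmas.append(np.exp(W_sig @ h_k))                    # positive std via exp
    weights = np.full(K, 1.0 / K)  # uniform weights for this sketch
    return weights, np.array(mus), np.array(sigmas)

h0 = np.zeros(H)
w, mu, sigma = posterior_step(h0, prev_mu=np.zeros(Z), prev_sigma=np.ones(Z))
print(w.shape, mu.shape, sigma.shape)  # (4,) (4, 2) (4, 2)
```

Because each mixture component descends from a different sample of the latent history, the resulting posterior can place mass on several distinct dynamical modes rather than averaging them into one.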
format Online
Article
Text
id pubmed-8700018
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8700018 2021-12-24 History Marginalization Improves Forecasting in Variational Recurrent Neural Networks. Qiu, Chen; Mandt, Stephan; Rudolph, Maja. Entropy (Basel), Article. MDPI 2021-11-24 /pmc/articles/PMC8700018/ /pubmed/34945869 http://dx.doi.org/10.3390/e23121563 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title History Marginalization Improves Forecasting in Variational Recurrent Neural Networks
topic Article