Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization
Main Authors: | Henderson, Donna; Lunter, Gerton |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Berlin Heidelberg, 2019 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7382664/ https://www.ncbi.nlm.nih.gov/pubmed/32764847 http://dx.doi.org/10.1007/s00180-019-00937-4 |
_version_ | 1783563290094338048 |
author | Henderson, Donna Lunter, Gerton |
author_facet | Henderson, Donna Lunter, Gerton |
author_sort | Henderson, Donna |
collection | PubMed |
description | Expectation maximization (EM) is a technique for estimating maximum-likelihood parameters of a latent variable model given observed data by alternating between taking expectations of sufficient statistics, and maximizing the expected log likelihood. For situations where sufficient statistics are intractable, stochastic approximation EM (SAEM) is often used, which uses Monte Carlo techniques to approximate the expected log likelihood. Two common implementations of SAEM, Batch EM (BEM) and online EM (OEM), are parameterized by a “learning rate”, and their efficiency depends strongly on this parameter. We propose an extension to the OEM algorithm, termed Introspective Online Expectation Maximization (IOEM), which removes the need for specifying this parameter by adapting the learning rate to trends in the parameter updates. We show that our algorithm matches the efficiency of the optimal BEM and OEM algorithms in multiple models, and that the efficiency of IOEM can exceed that of BEM/OEM methods with optimal learning rates when the model has many parameters. Finally we use IOEM to fit two models to a financial time series. A Python implementation is available at https://github.com/luntergroup/IOEM.git. |
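To make the abstract's "learning rate" concrete, the following is a minimal sketch of plain online EM (OEM) with a hand-chosen Robbins–Monro learning rate, applied to a two-component Gaussian mixture with known weights and unit variances. All names (`online_em_gmm`, `alpha`, etc.) are illustrative assumptions, not taken from the paper; this is the fixed-rate baseline whose tuning burden the paper's IOEM algorithm is designed to remove, not the IOEM method itself (which is in the linked repository).

```python
import numpy as np

def online_em_gmm(ys, alpha=0.6):
    """Online EM for a two-component Gaussian mixture with unit variances
    and fixed equal weights. Only the component means are estimated.

    The learning rate gamma_t = (t + 1)**(-alpha) must be chosen by hand;
    its exponent alpha strongly affects efficiency, which is the tuning
    problem adaptive schemes like IOEM address.
    """
    mu = np.array([-0.5, 0.5])   # initial component means (breaks symmetry)
    w = np.array([0.5, 0.5])     # fixed, known mixture weights
    # Running approximations of the sufficient statistics E[r_k], E[r_k * y]
    s0 = w.copy()
    s1 = mu * w
    for t, y in enumerate(ys, start=1):
        gamma = (t + 1) ** (-alpha)   # Robbins-Monro learning rate, < 1
        # E-step: responsibilities of each component for observation y
        dens = w * np.exp(-0.5 * (y - mu) ** 2)
        r = dens / dens.sum()
        # Stochastic-approximation update of the sufficient statistics
        s0 = (1 - gamma) * s0 + gamma * r
        s1 = (1 - gamma) * s1 + gamma * r * y
        # M-step: means maximizing the expected log likelihood
        mu = s1 / s0
    return mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(2, 1, 5000)])
    rng.shuffle(data)
    print(online_em_gmm(data))  # means should approach -2 and 2 (order may vary)
```

Choosing `alpha` trades off noise against adaptation speed; too small and the estimates stay noisy, too large and early mistakes are frozen in, which motivates adapting the rate from the parameter-update trends as the paper proposes.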
format | Online Article Text |
id | pubmed-7382664 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-73826642020-08-04 Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization Henderson, Donna Lunter, Gerton Comput Stat Original Paper |
Springer Berlin Heidelberg 2019-12-03 2020 /pmc/articles/PMC7382664/ /pubmed/32764847 http://dx.doi.org/10.1007/s00180-019-00937-4 Text en © The Author(s) 2019 Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Original Paper Henderson, Donna Lunter, Gerton Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization |
title | Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization |
title_sort | efficient inference in state-space models through adaptive learning in online monte carlo expectation maximization |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7382664/ https://www.ncbi.nlm.nih.gov/pubmed/32764847 http://dx.doi.org/10.1007/s00180-019-00937-4 |
work_keys_str_mv | AT hendersondonna efficientinferenceinstatespacemodelsthroughadaptivelearninginonlinemontecarloexpectationmaximization AT luntergerton efficientinferenceinstatespacemodelsthroughadaptivelearninginonlinemontecarloexpectationmaximization |