
Global PAC Bounds for Learning Discrete Time Markov Chains

Learning models from observations of a system is a powerful tool with many applications. In this paper, we consider learning Discrete Time Markov Chains (DTMC), with different methods such as frequency estimation or Laplace smoothing. While models learnt with such methods converge asymptotically towards the exact system, a more practical question in the realm of trusted machine learning is how accurate a model learnt with a limited time budget is. Existing approaches provide bounds on how close the model is to the original system in terms of bounds on local (transition) probabilities, which has unclear implications for the global behavior. In this work, we provide global bounds on the error made by such a learning process, in terms of global behaviors formalized using temporal logic. More precisely, we propose a learning process ensuring a bound on the error in the probabilities of these properties. While such a learning process cannot exist for the full LTL logic, we provide one ensuring a bound that is uniform over all the formulas of CTL. Further, given one time-to-failure property, we provide an improved learning algorithm. Interestingly, frequency estimation is sufficient for the latter, while Laplace smoothing is needed to ensure non-trivial uniform bounds for the full CTL logic.

Bibliographic Details
Main Authors: Bazille, Hugo, Genest, Blaise, Jegourel, Cyrille, Sun, Jun
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7363184/
http://dx.doi.org/10.1007/978-3-030-53291-8_17
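For context, the two estimation methods named in the abstract (frequency estimation and Laplace smoothing) can be sketched in a few lines. This is an illustrative example only, not the paper's algorithm or its PAC bounds; the function name `learn_dtmc`, the `alpha` parameter, and the toy traces are invented for the illustration.

```python
# Illustrative sketch: estimating DTMC transition probabilities from
# observed traces. alpha = 0 gives plain frequency estimation; alpha > 0
# gives Laplace (additive) smoothing, which never assigns probability 0.
from collections import Counter

def learn_dtmc(traces, states, alpha=0.0):
    """Return estimated transition probabilities probs[s][t]."""
    counts = {s: Counter() for s in states}
    for trace in traces:
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    probs = {}
    for s in states:
        total = sum(counts[s].values()) + alpha * len(states)
        if total == 0:
            continue  # state never observed as a source; leave undefined
        probs[s] = {t: (counts[s][t] + alpha) / total for t in states}
    return probs

traces = [["a", "b", "a", "a"], ["a", "a", "b"]]
freq = learn_dtmc(traces, ["a", "b"])          # frequency estimation
smooth = learn_dtmc(traces, ["a", "b"], 1.0)   # Laplace smoothing
```

Note how the unobserved transition b→b gets probability 0 under frequency estimation but a positive probability under Laplace smoothing; this difference is the intuition behind the abstract's remark that smoothing is needed for non-trivial uniform bounds over CTL.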
author Bazille, Hugo
Genest, Blaise
Jegourel, Cyrille
Sun, Jun
collection PubMed
description Learning models from observations of a system is a powerful tool with many applications. In this paper, we consider learning Discrete Time Markov Chains (DTMC), with different methods such as frequency estimation or Laplace smoothing. While models learnt with such methods converge asymptotically towards the exact system, a more practical question in the realm of trusted machine learning is how accurate a model learnt with a limited time budget is. Existing approaches provide bounds on how close the model is to the original system in terms of bounds on local (transition) probabilities, which has unclear implications for the global behavior. In this work, we provide global bounds on the error made by such a learning process, in terms of global behaviors formalized using temporal logic. More precisely, we propose a learning process ensuring a bound on the error in the probabilities of these properties. While such a learning process cannot exist for the full LTL logic, we provide one ensuring a bound that is uniform over all the formulas of CTL. Further, given one time-to-failure property, we provide an improved learning algorithm. Interestingly, frequency estimation is sufficient for the latter, while Laplace smoothing is needed to ensure non-trivial uniform bounds for the full CTL logic.
format Online
Article
Text
id pubmed-7363184
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-7363184 2020-07-16 Global PAC Bounds for Learning Discrete Time Markov Chains Bazille, Hugo Genest, Blaise Jegourel, Cyrille Sun, Jun Computer Aided Verification Article
2020-06-16 /pmc/articles/PMC7363184/ http://dx.doi.org/10.1007/978-3-030-53291-8_17 Text en © The Author(s) 2020 Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
title Global PAC Bounds for Learning Discrete Time Markov Chains
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7363184/
http://dx.doi.org/10.1007/978-3-030-53291-8_17