Scenario-Based Verification of Uncertain MDPs
We consider Markov decision processes (MDPs) in which the transition probabilities and rewards belong to an uncertainty set parametrized by a collection of random variables. The probability distributions for these random parameters are unknown. The problem is to compute the probability to satisfy a...
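As a rough illustration of the setting described in the abstract, the sketch below samples the uncertain transition parameter of a toy three-state chain and checks a reachability specification on each sampled model. The chain structure, the Beta(2, 5) sampling distribution, and the threshold are illustrative assumptions and are not taken from the paper; the paper itself develops scenario optimization with formal confidence guarantees, whereas this is only a plain Monte Carlo sketch.

```python
"""Minimal sketch of scenario-style verification for an uncertain Markov chain.

Assumptions (not from the record): a toy chain s0 -> goal with prob p,
s0 -> s0 with prob q, s0 -> fail with prob 1 - p - q, where p is the
uncertain parameter; a Beta(2, 5) distribution stands in for the unknown
distribution purely for illustration.
"""
import random


def reach_prob(p: float, q: float, iters: int = 1000) -> float:
    """Probability of eventually reaching `goal` from `s0`,
    computed by simple value iteration on the toy chain."""
    x = 0.0
    for _ in range(iters):
        x = p * 1.0 + q * x + (1.0 - p - q) * 0.0
    return x


def scenario_verify(num_samples: int, threshold: float, seed: int = 0) -> float:
    """Draw scenario samples of the uncertain parameter, check the
    specification P(reach goal) >= threshold on each sampled chain,
    and return the empirical satisfaction rate."""
    rng = random.Random(seed)
    q = 0.3  # fixed self-loop probability in this toy model
    satisfied = 0
    for _ in range(num_samples):
        # Sampled success probability, scaled so that p + q <= 1.
        p = rng.betavariate(2, 5) * (1.0 - q)
        if reach_prob(p, q) >= threshold:
            satisfied += 1
    return satisfied / num_samples


if __name__ == "__main__":
    rate = scenario_verify(num_samples=1000, threshold=0.4)
    print(f"Estimated satisfaction probability: {rate:.3f}")
```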
Main Authors: Cubuktepe, Murat; Jansen, Nils; Junges, Sebastian; Katoen, Joost-Pieter; Topcu, Ufuk
Format: Online Article Text
Language: English
Published: 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7402411/ https://www.ncbi.nlm.nih.gov/pubmed/32754724 http://dx.doi.org/10.1007/978-3-030-45190-5_16
Similar Items
- PrIC3: Property Directed Reachability for MDPs
  by: Batz, Kevin, et al.
  Published: (2020)
- Simple Strategies in Multi-Objective MDPs
  by: Delgrange, Florent, et al.
  Published: (2020)
- Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation
  by: Ornik, Melkior, et al.
  Published: (2021)
- Finding Provably Optimal Markov Chains
  by: Spel, Jip, et al.
  Published: (2021)
- Inductive Synthesis for Probabilistic Programs Reaches New Horizons
  by: Andriushchenko, Roman, et al.
  Published: (2021)