
A Bayesian Network Approach to Explainable Reinforcement Learning with Distal Information


Bibliographic Details
Main Authors: Milani, Rudy; Moll, Maximilian; De Leone, Renato; Pickl, Stefan
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9961455/
https://www.ncbi.nlm.nih.gov/pubmed/36850617
http://dx.doi.org/10.3390/s23042013
Collection: PubMed
Description: Nowadays, Artificial Intelligence systems have expanded from research into industry and daily life, so understanding how they make decisions is becoming fundamental to reducing the lack of trust between users and machines and to increasing model transparency. This paper aims to automate the generation of explanations for model-free Reinforcement Learning algorithms by answering "why" and "why not" questions. To this end, we use Bayesian Networks in combination with the NOTEARS algorithm for automatic structure learning. This approach complements an existing framework well and thus demonstrates a step towards generating explanations with as little user input as possible. The approach is evaluated computationally on three benchmarks using different Reinforcement Learning methods to show that it is independent of the type of model used, and the explanations are then rated in a human study. The results are compared to other baseline explanation models to underline the strong performance of the presented framework in increasing understanding, transparency, and trust in the action chosen by the agent.
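The abstract pairs Bayesian Networks with the NOTEARS algorithm, which recasts DAG structure learning as continuous optimization by using the smooth acyclicity function h(W) = tr(e^(W∘W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a directed acyclic graph. The paper itself is not reproduced here; the following is only a minimal illustrative sketch of that acyclicity measure, not the authors' implementation:

```python
import numpy as np

def expm(m, terms=30):
    """Matrix exponential via a truncated Taylor series.

    Accurate enough for the small matrices used below.
    """
    result = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        result = result + term
    return result

def notears_acyclicity(W):
    """NOTEARS acyclicity measure h(W) = tr(exp(W ∘ W)) - d.

    W ∘ W is the elementwise (Hadamard) square; h(W) == 0
    iff W is the weighted adjacency matrix of a DAG.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# A 2-node DAG (single edge 0 -> 1): h is numerically zero.
dag = np.array([[0.0, 1.0],
                [0.0, 0.0]])

# A 2-cycle (0 -> 1 and 1 -> 0): h is strictly positive.
cycle = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
```

In NOTEARS, h(W) serves as an equality constraint inside an augmented-Lagrangian optimization of a data-fit loss, so structure learning needs no combinatorial search over graph orderings.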
ID: pubmed-9961455
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2023-02-10
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).