Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge
Within computational reinforcement learning, a growing body of work seeks to express an agent's knowledge of its world through large collections of predictions. While systems that encode predictions as General Value Functions (GVFs) have seen numerous developments in both theory and application...
Main Authors: | Kearney, Alex, Günther, Johannes, Pilarski, Patrick M. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9010283/ https://www.ncbi.nlm.nih.gov/pubmed/35434609 http://dx.doi.org/10.3389/frai.2022.826724 |
_version_ | 1784687452581003264 |
---|---|
author | Kearney, Alex; Günther, Johannes; Pilarski, Patrick M. |
author_facet | Kearney, Alex; Günther, Johannes; Pilarski, Patrick M. |
author_sort | Kearney, Alex |
collection | PubMed |
description | Within computational reinforcement learning, a growing body of work seeks to express an agent's knowledge of its world through large collections of predictions. While systems that encode predictions as General Value Functions (GVFs) have seen numerous developments in both theory and application, whether such approaches are explainable is unexplored. In this perspective piece, we explore GVFs as a form of explainable AI. To do so, we articulate a subjective agent-centric approach to explainability in sequential decision-making tasks. We propose that prior to explaining its decisions to others, a self-supervised agent must be able to introspectively explain decisions to itself. To clarify this point, we review prior applications of GVFs that involve human-agent collaboration. In doing so, we demonstrate that by making their subjective explanations public, predictive knowledge agents can improve the clarity of their operation in collaborative tasks. |
format | Online Article Text |
id | pubmed-9010283 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9010283 2022-04-16 Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge Kearney, Alex Günther, Johannes Pilarski, Patrick M. Front Artif Intell Artificial Intelligence Within computational reinforcement learning, a growing body of work seeks to express an agent's knowledge of its world through large collections of predictions. While systems that encode predictions as General Value Functions (GVFs) have seen numerous developments in both theory and application, whether such approaches are explainable is unexplored. In this perspective piece, we explore GVFs as a form of explainable AI. To do so, we articulate a subjective agent-centric approach to explainability in sequential decision-making tasks. We propose that prior to explaining its decisions to others, a self-supervised agent must be able to introspectively explain decisions to itself. To clarify this point, we review prior applications of GVFs that involve human-agent collaboration. In doing so, we demonstrate that by making their subjective explanations public, predictive knowledge agents can improve the clarity of their operation in collaborative tasks. Frontiers Media S.A. 2022-03-31 /pmc/articles/PMC9010283/ /pubmed/35434609 http://dx.doi.org/10.3389/frai.2022.826724 Text en Copyright © 2022 Kearney, Günther and Pilarski. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Artificial Intelligence Kearney, Alex Günther, Johannes Pilarski, Patrick M. Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title | Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title_full | Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title_fullStr | Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title_full_unstemmed | Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title_short | Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge |
title_sort | prediction, knowledge, and explainability: examining the use of general value functions in machine knowledge |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9010283/ https://www.ncbi.nlm.nih.gov/pubmed/35434609 http://dx.doi.org/10.3389/frai.2022.826724 |
work_keys_str_mv | AT kearneyalex predictionknowledgeandexplainabilityexaminingtheuseofgeneralvaluefunctionsinmachineknowledge AT guntherjohannes predictionknowledgeandexplainabilityexaminingtheuseofgeneralvaluefunctionsinmachineknowledge AT pilarskipatrickm predictionknowledgeandexplainabilityexaminingtheuseofgeneralvaluefunctionsinmachineknowledge |