Self reward design with fine-grained interpretability
Main authors: | Tjoa, Erico; Guan, Cuntai |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9886969/ https://www.ncbi.nlm.nih.gov/pubmed/36717641 http://dx.doi.org/10.1038/s41598-023-28804-9 |
_version_ | 1784880235334860800 |
---|---|
author | Tjoa, Erico; Guan, Cuntai |
collection | PubMed |
description | The black-box nature of deep neural networks (DNNs) has brought the issues of transparency and fairness to attention. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, and so on, is thus subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD) and inspired by Inverse Reward Design, can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human design, we show that some RL problems, such as lavaland and MuJoCo, can be solved using a model constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD addresses situations that would not make sense with black-box models, where a humanly understandable, semantics-based decision is required. (A minimal illustrative sketch of such a hand-designed yet trainable policy follows at the end of this record.) |
format | Online Article Text |
id | pubmed-9886969 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9886969 2023-02-01 Self reward design with fine-grained interpretability. Tjoa, Erico; Guan, Cuntai. Sci Rep, Article. Nature Publishing Group UK, 2023-01-30. /pmc/articles/PMC9886969/ /pubmed/36717641 http://dx.doi.org/10.1038/s41598-023-28804-9 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. |
title | Self reward design with fine-grained interpretability |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9886969/ https://www.ncbi.nlm.nih.gov/pubmed/36717641 http://dx.doi.org/10.1038/s41598-023-28804-9 |
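As an illustration of the idea described in the abstract above, the sketch below hand-designs a tiny policy in which every weight encodes a human-readable rule, while the model remains an ordinary trainable module. This is not the authors' implementation: the 1-D lavaland-like layout, the cell types, the action set, the `HandDesignedPolicy` and `act` names, and all weight values are assumptions chosen for clarity.

```python
# Illustrative sketch only (not the paper's code): a hand-designed,
# interpretable-by-construction policy in the spirit of Self Reward Design.
import torch
import torch.nn as nn

CELL_TYPES = ["dirt", "lava", "goal"]   # one-hot feature per neighbouring cell (assumed)
ACTIONS = ["left", "right"]             # assumed action set for a 1-D strip

class HandDesignedPolicy(nn.Module):
    """Every weight below encodes a human-readable rule, yet the module is an
    ordinary nn.Linear that could still be fine-tuned by gradient descent."""
    def __init__(self):
        super().__init__()
        # Input: concatenated one-hot cell types of the left and right neighbour
        # (2 cells x 3 types); output: one score per action.
        self.score = nn.Linear(2 * len(CELL_TYPES), len(ACTIONS), bias=False)
        with torch.no_grad():
            w = torch.zeros_like(self.score.weight)
            dirt, lava, goal = (CELL_TYPES.index(c) for c in ("dirt", "lava", "goal"))
            n = len(CELL_TYPES)
            # Rule 1: lava on a side strongly discourages stepping to that side.
            w[ACTIONS.index("left"), lava] = -5.0
            w[ACTIONS.index("right"), n + lava] = -5.0
            # Rule 2: the goal on a side strongly encourages stepping to that side.
            w[ACTIONS.index("left"), goal] = 5.0
            w[ACTIONS.index("right"), n + goal] = 5.0
            # Rule 3: plain dirt is mildly attractive, so the agent keeps moving.
            w[ACTIONS.index("left"), dirt] = 0.5
            w[ACTIONS.index("right"), n + dirt] = 0.5
            self.score.weight.copy_(w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: concatenated one-hot encodings of the left and right neighbour cells.
        return self.score(x)

def act(policy: HandDesignedPolicy, left_cell: str, right_cell: str) -> str:
    """Encode the two neighbouring cells and pick the highest-scoring action."""
    x = torch.zeros(2 * len(CELL_TYPES))
    x[CELL_TYPES.index(left_cell)] = 1.0
    x[len(CELL_TYPES) + CELL_TYPES.index(right_cell)] = 1.0
    return ACTIONS[int(policy(x).argmax())]

policy = HandDesignedPolicy()
print(act(policy, "lava", "dirt"))   # -> "right": the lava-avoidance weight wins
print(act(policy, "goal", "dirt"))   # -> "left": the goal-seeking weight wins
```

Because the weights live in a standard `nn.Linear`, the same module could in principle be optimized further by gradient descent, which is the combination of design-time interpretability and trainability that the abstract emphasizes.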