An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users
Main Authors: | Bignold, Adam; Cruz, Francisco; Dazeley, Richard; Vamplew, Peter; Foale, Cameron |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7985787/ https://www.ncbi.nlm.nih.gov/pubmed/33572399 http://dx.doi.org/10.3390/biomimetics6010013 |
_version_ | 1783668322558017536 |
---|---|
author | Bignold, Adam Cruz, Francisco Dazeley, Richard Vamplew, Peter Foale, Cameron |
author_facet | Bignold, Adam Cruz, Francisco Dazeley, Richard Vamplew, Peter Foale, Cameron |
author_sort | Bignold, Adam |
collection | PubMed |
description | Interactive reinforcement learning methods utilise an external information source to evaluate decisions and accelerate learning. Previous work has shown that human advice could significantly improve learning agents’ performance. When evaluating reinforcement learning algorithms, it is common to repeat experiments as parameters are altered or to gain a sufficient sample size. In this regard, requiring human interaction every time an experiment is restarted is undesirable, particularly when the expense of doing so can be considerable. Additionally, reusing the same people for the experiment introduces bias, as they will learn the behaviour of the agent and the dynamics of the environment. This paper presents a methodology for evaluating interactive reinforcement learning agents by employing simulated users. Simulated users allow human knowledge, bias, and interaction to be simulated. The use of simulated users allows the development and testing of reinforcement learning agents, and can provide indicative results of agent performance under defined human constraints. While simulated users are no replacement for actual humans, they do offer an affordable and fast alternative for evaluating assisted agents. We introduce a method for performing a preliminary evaluation utilising simulated users to show how performance changes depending on the type of user assisting the agent. Moreover, we describe how human interaction may be simulated, and present an experiment illustrating the applicability of simulating users in evaluating agent performance when assisted by different types of trainers. Experimental results show that the use of this methodology allows for greater insight into the performance of interactive reinforcement learning agents when advised by different users. The use of simulated users with varying characteristics allows for evaluation of the impact of those characteristics on the behaviour of the learning agent. |
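The idea in the description can be made concrete with a small sketch. The Python fragment below is a hypothetical illustration only, not the authors' implementation: it assumes a simulated user defined by two parameters, availability (how often advice is offered) and accuracy (how often that advice is the optimal action), advising an epsilon-greedy tabular learner. All names (`SimulatedUser`, `choose_action`, the parameter values) are assumptions introduced for illustration.

```python
import random


class SimulatedUser:
    """Hypothetical simulated trainer, parameterised by availability and accuracy."""

    def __init__(self, availability: float, accuracy: float):
        self.availability = availability  # probability of offering advice at any step
        self.accuracy = accuracy          # probability that offered advice is optimal

    def advise(self, optimal_action, actions):
        """Return an advised action, or None if the user stays silent this step."""
        if random.random() > self.availability:
            return None
        if random.random() < self.accuracy:
            return optimal_action
        wrong = [a for a in actions if a != optimal_action]
        return random.choice(wrong) if wrong else optimal_action


def choose_action(q_values, actions, user, optimal_action, epsilon=0.1):
    """Epsilon-greedy selection that defers to the simulated user's advice when given."""
    advice = user.advise(optimal_action, actions)
    if advice is not None:
        return advice
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[a])


# Sweeping over user profiles (e.g. rarely available but accurate vs. always
# available but error-prone) lets agent performance be compared across trainer
# types without recruiting new human participants for every repeated run.
profiles = [SimulatedUser(availability=p, accuracy=q)
            for p in (0.1, 0.5, 0.9) for q in (0.6, 0.8, 1.0)]

if __name__ == "__main__":
    actions = [0, 1, 2, 3]                # e.g. up/down/left/right in a grid world
    q_values = {a: 0.0 for a in actions}  # Q-values for one state, all zero initially
    print(choose_action(q_values, actions, profiles[0], optimal_action=2))
```

Varying only these two parameters per profile is one simple way to reproduce the point made in the description: differences in trainer characteristics, not just the environment, can drive differences in the learning agent's behaviour and performance.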
format | Online Article Text |
id | pubmed-7985787 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7985787 2021-03-24 An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users Bignold, Adam Cruz, Francisco Dazeley, Richard Vamplew, Peter Foale, Cameron Biomimetics (Basel) Article MDPI 2021-02-09 /pmc/articles/PMC7985787/ /pubmed/33572399 http://dx.doi.org/10.3390/biomimetics6010013 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Bignold, Adam Cruz, Francisco Dazeley, Richard Vamplew, Peter Foale, Cameron An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title | An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title_full | An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title_fullStr | An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title_full_unstemmed | An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title_short | An Evaluation Methodology for Interactive Reinforcement Learning with Simulated Users |
title_sort | evaluation methodology for interactive reinforcement learning with simulated users |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7985787/ https://www.ncbi.nlm.nih.gov/pubmed/33572399 http://dx.doi.org/10.3390/biomimetics6010013 |
work_keys_str_mv | AT bignoldadam anevaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT cruzfrancisco anevaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT dazeleyrichard anevaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT vamplewpeter anevaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT foalecameron anevaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT bignoldadam evaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT cruzfrancisco evaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT dazeleyrichard evaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT vamplewpeter evaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers AT foalecameron evaluationmethodologyforinteractivereinforcementlearningwithsimulatedusers |