Intrinsically Motivated Exploration of Learned Goal Spaces

Finding algorithms that allow agents to discover a wide variety of skills efficiently and autonomously remains a challenge of Artificial Intelligence. Intrinsically Motivated Goal Exploration Processes (IMGEPs) have been shown to enable real-world robots to learn repertoires of policies producing a wide range of diverse effects. They work by enabling agents to autonomously sample goals that they then try to achieve. In practice, this strategy leads to efficient exploration of complex environments with high-dimensional continuous actions. Until recently, it was necessary to provide the agents with an engineered goal space containing relevant features of the environment. In this article we show that the goal space can be learned using deep representation learning algorithms, effectively reducing the burden of designing goal spaces. Our results pave the way to autonomous learning agents that are able to build a representation of the world and use it to explore their environment efficiently. We present experiments in two environments using population-based IMGEPs. The first experiments are performed on a simple, yet challenging, simulated environment. A second set of experiments then tests the applicability of those principles on a real-world robotic setup, where a 6-joint robotic arm learns to manipulate a ball inside an arena by choosing goals in a space learned from its past experience.
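As a rough illustration of the goal-exploration mechanism described in the abstract, the following Python sketch implements a minimal population-based IMGEP with a learned goal space. Everything in it is an illustrative assumption, not the paper's method: the toy `environment`, the PCA projection (a simple stand-in for the deep representation learning the article actually uses), and the perturbation scale are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def environment(policy):
    # Toy stand-in for the real setup (e.g. final ball position in the arena):
    # a fixed nonlinear map from 4 policy parameters to a 2-D outcome.
    return np.tanh(policy[:2] + 0.5 * policy[2:] ** 2)

# 1) Bootstrap phase: execute random policies, record their outcomes.
policies = [rng.normal(size=4) for _ in range(50)]
outcomes = [environment(p) for p in policies]

# 2) Learn a goal space from the observed outcomes. The article uses deep
#    representation learning; a PCA computed via SVD is a toy stand-in here.
X = np.array(outcomes)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(outcome):
    # Project an observed outcome into the learned goal space.
    return (outcome - mean) @ Vt.T

reps = [encode(o) for o in outcomes]

# 3) Goal exploration loop: sample a goal in the learned space, perturb the
#    policy whose past outcome lies closest to it, and store the result.
for _ in range(200):
    R = np.array(reps)
    goal = rng.uniform(R.min(axis=0), R.max(axis=0))
    nearest = min(range(len(reps)),
                  key=lambda i: float(np.linalg.norm(reps[i] - goal)))
    new_policy = policies[nearest] + 0.1 * rng.normal(size=4)
    outcome = environment(new_policy)
    policies.append(new_policy)
    outcomes.append(outcome)
    reps.append(encode(outcome))

print(f"collected {len(outcomes)} (policy, outcome) pairs")
```

The design choice that drives exploration is sampling goals uniformly over the observed bounds of the learned space: goals that land far from any previously reached outcome pull the agent toward rarely produced effects, which is what makes goal sampling an intrinsic motivation signal.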


Bibliographic Details
Main Authors: Laversanne-Finot, Adrien; Péré, Alexandre; Oudeyer, Pierre-Yves
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7835425/
https://www.ncbi.nlm.nih.gov/pubmed/33510630
http://dx.doi.org/10.3389/fnbot.2020.555271
Record Details
Collection: PubMed
Record ID: pubmed-7835425
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Neurorobot (Neuroscience)
Published Online: 2021-01-12

Copyright © 2021 Laversanne-Finot, Péré and Oudeyer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY; http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.