
Open-Ended Learning: A Conceptual Framework Based on Representational Redescription

Reinforcement learning (RL) aims at building a policy that maximizes a task-related reward within a given domain. When the domain is known, i.e., when its states, actions and reward are defined, Markov Decision Processes (MDPs) provide a convenient theoretical framework to formalize RL. But in an open-ended learning process, an agent or robot must solve an unbounded sequence of tasks that are not known in advance, and the corresponding MDPs cannot be built at design time. This defines the main challenge of open-ended learning: how can the agent learn to behave appropriately when adequate state, action and reward representations are not given? In this paper, we propose a conceptual framework to address this question. We assume an agent endowed with low-level perception and action capabilities. This agent receives an external reward when it faces a task. It must discover the state and action representations that will let it cast the tasks as MDPs in order to solve them by RL. The relevance of the action or state representation is critical for the agent to learn efficiently. Considering that the agent starts with low-level, task-agnostic state and action spaces based on its low-level perception and action capabilities, we describe open-ended learning as the challenge of building adequate representations of states and actions, i.e., of redescribing the available representations. We suggest an iterative approach to this problem based on several successive Representational Redescription processes, and highlight the corresponding challenges in which intrinsic motivations play a key role.
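For readers unfamiliar with the terminology, a minimal formal sketch of the MDP framing used in the abstract may help (illustrative only, not taken from the article; the observation space \(\mathcal{O}\) and the mapping \(\phi\) are assumptions introduced here for clarity):

\[
\mathcal{M} = (\mathcal{S}, \mathcal{A}, T, R, \gamma),
\qquad
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t \ge 0} \gamma^{t}\, R(s_t, a_t)\right].
\]

Solving a task by RL presupposes such a tuple. Open-ended learning, as framed above, is the problem of constructing the state space \(\mathcal{S}\), e.g., through a learned mapping \(\phi : \mathcal{O} \rightarrow \mathcal{S}\) from low-level sensor readings to task-relevant states, and the action space \(\mathcal{A}\), e.g., skills grounding abstract actions in low-level motor commands, when these are not given at design time. In this sketch, representational redescription amounts to learning such mappings so that a standard RL algorithm can then be run in the redescribed spaces.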


Bibliographic Details
Main Authors: Doncieux, Stephane; Filliat, David; Díaz-Rodríguez, Natalia; Hospedales, Timothy; Duro, Richard; Coninx, Alexandre; Roijers, Diederik M.; Girard, Benoît; Perrin, Nicolas; Sigaud, Olivier
Journal: Front Neurorobot (Frontiers in Neurorobotics)
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018-09-25
Subjects: Neuroscience
Collection: PubMed (record pubmed-6167466, National Center for Biotechnology Information; record format MEDLINE/PubMed)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6167466/
https://www.ncbi.nlm.nih.gov/pubmed/30319388
http://dx.doi.org/10.3389/fnbot.2018.00059
License: Copyright © 2018 Doncieux, Filliat, Díaz-Rodríguez, Hospedales, Duro, Coninx, Roijers, Girard, Perrin and Sigaud. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.