
Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning


Bibliographic Details
Main Authors: Zhou, Xiaomao, Bai, Tao, Gao, Yanbin, Han, Yuntao
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6479296/
https://www.ncbi.nlm.nih.gov/pubmed/30939807
http://dx.doi.org/10.3390/s19071576
author Zhou, Xiaomao
Bai, Tao
Gao, Yanbin
Han, Yuntao
collection PubMed
description Extensive studies have shown that many animals' capability of forming spatial representations for self-localization, path planning, and navigation relies on the functionalities of place and head-direction (HD) cells in the hippocampus. Although there are numerous hippocampal modeling approaches, only a few span the wide range of functionalities from processing raw sensory signals to planning and action generation. This paper presents a vision-based navigation system that involves generating place and HD cells through learning from visual images, building topological maps based on the learned cell representations, and performing navigation using hierarchical reinforcement learning. First, place and HD cells are trained from sequences of visual stimuli in an unsupervised fashion. A modified Slow Feature Analysis (SFA) algorithm is proposed to learn the different cell types intentionally by restricting their learning to separate phases of spatial exploration. Then, to extract the metric information encoded in these unsupervised representations, a self-organized learning algorithm is adopted to learn from the emergent cell activities and to generate topological maps that reveal the topology of the environment and information about the robot's head direction, respectively. This enables the robot to perform self-localization and orientation detection based on the generated maps. Finally, goal-directed navigation is performed using reinforcement learning in continuous state spaces that are represented by the population activities of place cells. In particular, since the topological map provides a natural hierarchical representation of the environment, hierarchical reinforcement learning (HRL) is used to exploit this hierarchy to accelerate learning.
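The record does not reproduce the modified SFA algorithm, but the core idea of SFA — extracting the most slowly varying components of a time-varying signal, which is what lets place- and HD-cell-like representations emerge from visual streams — can be sketched in its standard linear form. This is a minimal illustration on synthetic data, not the authors' modified version; all names and constants below are assumptions:

```python
import numpy as np

def linear_sfa(x, n_features):
    """Linear Slow Feature Analysis: project (T, D) signals onto the
    directions whose outputs vary most slowly over time."""
    x = x - x.mean(axis=0)
    # Whiten: rotate and rescale so every direction has unit variance.
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    keep = eigval > 1e-10                          # drop degenerate directions
    whitener = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ whitener
    # Slowness objective: minimize the variance of the temporal derivative.
    dval, dvec = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    # eigh returns eigenvalues in ascending order, so the first columns
    # correspond to the slowest-varying output directions.
    return whitener @ dvec[:, :n_features]

# Toy usage: a slow sine mixed with fast noise should be recovered first.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)
signals = np.column_stack([slow + 0.05 * rng.standard_normal(500),
                           rng.standard_normal(500),
                           rng.standard_normal(500)])
proj = linear_sfa(signals, n_features=1)
y = (signals - signals.mean(axis=0)) @ proj        # slowest extracted feature
```

The paper's modification additionally restricts which phases of exploration each cell type is trained on; the sketch above shows only the unmodified slowness objective.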
The HRL operates on different spatial scales: a high-level policy learns to select subgoals, and a low-level policy learns over primitive actions to specialize in reaching the selected subgoals. Experimental results demonstrate that our system navigates a robot to the desired position effectively, and that the HRL shows much better learning performance than standard RL in solving our navigation tasks.
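The two-level scheme — a high-level policy choosing subgoals over topological-map nodes and a low-level policy executing primitive actions — can be sketched in tabular form. Everything below (the 1-D corridor, the subgoal positions, rewards, and learning constants) is an illustrative assumption, not the paper's environment or hyperparameters:

```python
import numpy as np

# Minimal two-level tabular HRL sketch in the spirit described above.
N_STATES = 12                   # corridor cells 0..11
SUBGOALS = [3, 7, 11]           # stand-ins for topological-map nodes
GOAL = 11
ACTIONS = [-1, +1]              # primitive moves: left, right

rng = np.random.default_rng(1)
q_high = np.zeros((N_STATES, len(SUBGOALS)))               # high level: pick a subgoal
q_low = np.zeros((len(SUBGOALS), N_STATES, len(ACTIONS)))  # low level: reach it

def eps_greedy(q_row, eps=0.1):
    """Pick a random index with probability eps, else the greedy one."""
    return rng.integers(len(q_row)) if rng.random() < eps else int(np.argmax(q_row))

for episode in range(500):
    s = 0
    while s != GOAL:
        g = eps_greedy(q_high[s])                  # high-level subgoal choice
        sg, s0, ext_return, steps = SUBGOALS[g], s, 0.0, 0
        while s != sg and steps < 30:              # low-level option runs
            a = eps_greedy(q_low[g, s])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            # Intrinsic reward: the low level is paid for reaching its subgoal.
            r_int = 1.0 if s2 == sg else -0.01
            q_low[g, s, a] += 0.5 * (r_int + 0.9 * q_low[g, s2].max() - q_low[g, s, a])
            # Environment reward accumulates for the high level (undiscounted
            # within the option, which keeps the sketch simple).
            ext_return += 1.0 if s2 == GOAL else -0.01
            s, steps = s2, steps + 1
        q_high[s0, g] += 0.5 * (ext_return + 0.9 * q_high[s].max() - q_high[s0, g])
```

After training, greedy high-level choices chain subgoals toward the goal while each low-level policy only has to solve the short hop to its own subgoal — the decomposition that gives HRL its speedup over flat RL on long-horizon tasks.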
format Online
Article
Text
id pubmed-6479296
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6479296 2019-04-29 Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning Zhou, Xiaomao Bai, Tao Gao, Yanbin Han, Yuntao Sensors (Basel) Article MDPI 2019-04-01 /pmc/articles/PMC6479296/ /pubmed/30939807 http://dx.doi.org/10.3390/s19071576 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6479296/
https://www.ncbi.nlm.nih.gov/pubmed/30939807
http://dx.doi.org/10.3390/s19071576