
Learn to Steer through Deep Reinforcement Learning


Bibliographic Details
Main Authors: Wu, Keyu, Abolfazli Esfahani, Mahdi, Yuan, Shenghai, Wang, Han
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263476/
https://www.ncbi.nlm.nih.gov/pubmed/30373261
http://dx.doi.org/10.3390/s18113650
collection PubMed
description It is crucial for robots to autonomously steer in complex environments safely without colliding with any obstacles. Compared to conventional methods, deep reinforcement learning-based methods are able to learn from past experiences automatically and enhance the generalization capability to cope with unseen circumstances. Therefore, we propose an end-to-end deep reinforcement learning algorithm in this paper to improve the performance of autonomous steering in complex environments. By embedding a branching noisy dueling architecture, the proposed model is capable of deriving steering commands directly from raw depth images with high efficiency. Specifically, our learning-based approach extracts the feature representation from depth inputs through convolutional neural networks and maps it to both linear and angular velocity commands simultaneously through different streams of the network. Moreover, the training framework is also meticulously designed to improve the learning efficiency and effectiveness. It is worth noting that the developed system is readily transferable from virtual training scenarios to real-world deployment without any fine-tuning by utilizing depth images. The proposed method is evaluated and compared with a series of baseline methods in various virtual environments. Experimental results demonstrate the superiority of the proposed model in terms of average reward, learning efficiency, success rate as well as computational time. Moreover, a variety of real-world experiments are also conducted which reveal the high adaptability of our model to both static and dynamic obstacle-cluttered environments.
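The abstract above describes a branching dueling architecture in which a shared state value is combined with separate advantage streams for the linear- and angular-velocity commands. The following is a minimal numerical sketch of that per-branch dueling aggregation only; it is not the authors' implementation — the branch sizes and values are illustrative assumptions, and the convolutional depth-image feature extractor and the noisy layers from the paper are omitted.

```python
import numpy as np

def branch_q_values(state_value, advantages):
    """Dueling aggregation for one action branch:
    Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
    return state_value + advantages - advantages.mean()

# Hypothetical network outputs for one depth-image state: a shared
# scalar state value and one advantage vector per action branch
# (3 linear-velocity bins and 5 angular-velocity bins are assumptions).
state_value = 1.5
linear_adv = np.array([0.2, -0.1, 0.4])
angular_adv = np.array([0.0, 0.3, -0.2, 0.1, 0.05])

q_linear = branch_q_values(state_value, linear_adv)
q_angular = branch_q_values(state_value, angular_adv)

# Each branch selects its own greedy action, so a single forward pass
# yields both velocity commands simultaneously, as the abstract states.
action = (int(np.argmax(q_linear)), int(np.argmax(q_angular)))
```

Subtracting the mean advantage makes the decomposition identifiable: the per-branch Q-values average back to the shared state value, while the argmax of each branch is determined by its own advantage stream.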
id pubmed-6263476
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling Sensors (Basel), Article. MDPI, published online 2018-10-27. Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
topic Article