Scene perception based visual navigation of mobile robot in indoor environment
Vision-only navigation is key to reducing the cost and broadening the adoption of indoor mobile robots. Considering the unpredictable nature of artificial environments, deep learning techniques can be used to perform navigation thanks to their strong ability to abstract image features. In this paper, we...
Main Authors: | Ran, T., Yuan, L., Zhang, J.b. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | ISA. Published by Elsevier Ltd., 2021 |
Subjects: | Practice Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7550175/ https://www.ncbi.nlm.nih.gov/pubmed/33069374 http://dx.doi.org/10.1016/j.isatra.2020.10.023 |
_version_ | 1783592920860852224 |
---|---|
author | Ran, T. Yuan, L. Zhang, J.b. |
author_facet | Ran, T. Yuan, L. Zhang, J.b. |
author_sort | Ran, T. |
collection | PubMed |
description | Vision-only navigation is key to reducing the cost and broadening the adoption of indoor mobile robots. Considering the unpredictable nature of artificial environments, deep learning techniques can be used to perform navigation thanks to their strong ability to abstract image features. In this paper, we proposed a low-cost, vision-only perception approach to indoor mobile robot navigation that converts the problem of visual navigation into scene classification. Existing related research based on deep scene classification networks has lower accuracy and a heavier computational burden, and the navigation system has not been fully assessed in previous work. We therefore designed a shallow convolutional neural network (CNN) with higher scene classification accuracy and efficiency to process images captured by a monocular camera. In addition, we proposed an adaptive weighted control (AWC) algorithm and combined it with regular control (RC) to improve the robot’s motion performance. We demonstrated the capability and robustness of the proposed navigation method through extensive experiments in both static and dynamic unknown environments. Qualitative and quantitative results showed that the system performs better than previous related work in unknown environments. |
format | Online Article Text |
id | pubmed-7550175 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | ISA. Published by Elsevier Ltd. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7550175 2020-10-13 Scene perception based visual navigation of mobile robot in indoor environment Ran, T. Yuan, L. Zhang, J.b. ISA Trans Practice Article Vision-only navigation is key to reducing the cost and broadening the adoption of indoor mobile robots. Considering the unpredictable nature of artificial environments, deep learning techniques can be used to perform navigation thanks to their strong ability to abstract image features. In this paper, we proposed a low-cost, vision-only perception approach to indoor mobile robot navigation that converts the problem of visual navigation into scene classification. Existing related research based on deep scene classification networks has lower accuracy and a heavier computational burden, and the navigation system has not been fully assessed in previous work. We therefore designed a shallow convolutional neural network (CNN) with higher scene classification accuracy and efficiency to process images captured by a monocular camera. In addition, we proposed an adaptive weighted control (AWC) algorithm and combined it with regular control (RC) to improve the robot’s motion performance. We demonstrated the capability and robustness of the proposed navigation method through extensive experiments in both static and dynamic unknown environments. Qualitative and quantitative results showed that the system performs better than previous related work in unknown environments. ISA. Published by Elsevier Ltd. 2021-03 2020-10-12 /pmc/articles/PMC7550175/ /pubmed/33069374 http://dx.doi.org/10.1016/j.isatra.2020.10.023 Text en © 2020 ISA. Published by Elsevier Ltd. All rights reserved. |
spellingShingle | Practice Article Ran, T. Yuan, L. Zhang, J.b. Scene perception based visual navigation of mobile robot in indoor environment |
title | Scene perception based visual navigation of mobile robot in indoor environment |
title_full | Scene perception based visual navigation of mobile robot in indoor environment |
title_fullStr | Scene perception based visual navigation of mobile robot in indoor environment |
title_full_unstemmed | Scene perception based visual navigation of mobile robot in indoor environment |
title_short | Scene perception based visual navigation of mobile robot in indoor environment |
title_sort | scene perception based visual navigation of mobile robot in indoor environment |
topic | Practice Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7550175/ https://www.ncbi.nlm.nih.gov/pubmed/33069374 http://dx.doi.org/10.1016/j.isatra.2020.10.023 |
work_keys_str_mv | AT rant sceneperceptionbasedvisualnavigationofmobilerobotinindoorenvironment AT yuanl sceneperceptionbasedvisualnavigationofmobilerobotinindoorenvironment AT zhangjb sceneperceptionbasedvisualnavigationofmobilerobotinindoorenvironment |
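The abstract above frames navigation as scene classification with a shallow CNN over monocular camera frames, but this record does not give the network architecture, input size, or scene labels. The following is a minimal sketch only, assuming a small three-block convolutional network in PyTorch with made-up class names; it illustrates the general idea, not the network reported in the paper.

```python
# Hypothetical sketch of a shallow CNN scene classifier for monocular indoor
# images. Layer sizes, input resolution, and SCENE_CLASSES are assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn

SCENE_CLASSES = ["corridor", "doorway", "room", "obstacle_ahead"]  # assumed labels

class ShallowSceneCNN(nn.Module):
    def __init__(self, num_classes: int = len(SCENE_CLASSES)):
        super().__init__()
        # Three small conv blocks keep the network shallow and cheap enough
        # to run on an onboard computer.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling keeps the head small
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: classify one RGB frame (here a 128x128 stand-in for a camera image).
model = ShallowSceneCNN().eval()
frame = torch.rand(1, 3, 128, 128)
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)
print(SCENE_CLASSES[int(probs.argmax())], float(probs.max()))
```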
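The abstract also mentions combining an adaptive weighted control (AWC) algorithm with regular control (RC) to improve motion performance, but the actual control law is not described in this record. The snippet below is only a hypothetical illustration of weighting two velocity commands by the scene classifier's confidence; `VelocityCommand`, `blend_commands`, and the confidence-based weight are all assumptions introduced for illustration.

```python
# Hypothetical illustration of blending an adaptive (AWC-style) command with a
# regular (RC-style) command. The weighting scheme is assumed, not the paper's.
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    linear: float   # m/s
    angular: float  # rad/s

def blend_commands(rc: VelocityCommand,
                   awc: VelocityCommand,
                   confidence: float) -> VelocityCommand:
    """Weight the adaptive command by the classifier's confidence (clamped to [0, 1])
    and fall back toward the regular command when confidence is low."""
    w = min(max(confidence, 0.0), 1.0)
    return VelocityCommand(
        linear=(1.0 - w) * rc.linear + w * awc.linear,
        angular=(1.0 - w) * rc.angular + w * awc.angular,
    )

# Example: a cautious regular command blended with a more assertive adaptive
# command when the scene classifier is 80% confident.
cmd = blend_commands(VelocityCommand(0.15, 0.0),
                     VelocityCommand(0.30, 0.2),
                     confidence=0.8)
print(cmd)
```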