An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot

Robot simultaneous localization and mapping (SLAM) is an important and useful technology in the robotics field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information and cannot satisfy the needs of complex applications. A semantic map can address this problem efficiently and has therefore become a research hotspot. This paper proposes an improved deep residual network- (ResNet-) based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. Then, a robust feature point extraction method is adopted in the front-end module of the SLAM system, which effectively reduces the probability of camera tracking loss. In addition, an improved key frame insertion method is introduced into the visual SLAM system to enhance its stability while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract semantic information from the environment and complete the construction of the semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective.


Bibliographic Details
Main Authors: Ni, Jianjun, Gong, Tao, Gu, Yafei, Zhu, Jinxiu, Fan, Xinnan
Format: Online Article Text
Language: English
Published: Hindawi 2020
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7035522/
https://www.ncbi.nlm.nih.gov/pubmed/32104171
http://dx.doi.org/10.1155/2020/7490840
_version_ 1783500075198054400
author Ni, Jianjun
Gong, Tao
Gu, Yafei
Zhu, Jinxiu
Fan, Xinnan
author_facet Ni, Jianjun
Gong, Tao
Gu, Yafei
Zhu, Jinxiu
Fan, Xinnan
author_sort Ni, Jianjun
collection PubMed
description Robot simultaneous localization and mapping (SLAM) is an important and useful technology in the robotics field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information and cannot satisfy the needs of complex applications. A semantic map can address this problem efficiently and has therefore become a research hotspot. This paper proposes an improved deep residual network- (ResNet-) based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. Then, a robust feature point extraction method is adopted in the front-end module of the SLAM system, which effectively reduces the probability of camera tracking loss. In addition, an improved key frame insertion method is introduced into the visual SLAM system to enhance its stability while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract semantic information from the environment and complete the construction of the semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective.
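The record gives only a high-level summary of the approach and does not describe the network architecture itself. As a point of reference for the "deep residual network" the title refers to, the sketch below shows a standard ResNet basic residual block in PyTorch. It is a generic illustration, not the authors' improved ResNet variant; the class and parameter names are purely illustrative.

```python
# Minimal sketch of a standard ResNet basic residual block (He et al., 2016),
# shown only to illustrate the residual-learning idea; it does not reproduce
# the improved ResNet described in the paper.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection: output = F(x) + shortcut(x)."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection on the skip path when the spatial size or channel
        # count changes, so the addition remains shape-compatible.
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The residual connection lets gradients flow past the convolutions,
        # which is what makes very deep networks trainable.
        return self.relu(out + self.shortcut(x))


if __name__ == "__main__":
    block = ResidualBlock(64, 128, stride=2)
    x = torch.randn(1, 64, 56, 56)
    print(block(x).shape)  # torch.Size([1, 128, 28, 28])
```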
format Online
Article
Text
id pubmed-7035522
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-7035522 2020-02-26 An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot Ni, Jianjun Gong, Tao Gu, Yafei Zhu, Jinxiu Fan, Xinnan Comput Intell Neurosci Research Article Robot simultaneous localization and mapping (SLAM) is an important and useful technology in the robotics field. However, the environmental map constructed by traditional visual SLAM methods contains little semantic information and cannot satisfy the needs of complex applications. A semantic map can address this problem efficiently and has therefore become a research hotspot. This paper proposes an improved deep residual network- (ResNet-) based semantic SLAM method for monocular vision robots. In the proposed approach, an improved image matching algorithm based on feature points is presented to enhance the anti-interference ability of the algorithm. Then, a robust feature point extraction method is adopted in the front-end module of the SLAM system, which effectively reduces the probability of camera tracking loss. In addition, an improved key frame insertion method is introduced into the visual SLAM system to enhance its stability while the robot is turning and moving. Furthermore, an improved ResNet model is proposed to extract semantic information from the environment and complete the construction of the semantic map. Finally, various experiments are conducted, and the results show that the proposed method is effective. Hindawi 2020-02-10 /pmc/articles/PMC7035522/ /pubmed/32104171 http://dx.doi.org/10.1155/2020/7490840 Text en Copyright © 2020 Jianjun Ni et al. http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Ni, Jianjun
Gong, Tao
Gu, Yafei
Zhu, Jinxiu
Fan, Xinnan
An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title_full An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title_fullStr An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title_full_unstemmed An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title_short An Improved Deep Residual Network-Based Semantic Simultaneous Localization and Mapping Method for Monocular Vision Robot
title_sort improved deep residual network-based semantic simultaneous localization and mapping method for monocular vision robot
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7035522/
https://www.ncbi.nlm.nih.gov/pubmed/32104171
http://dx.doi.org/10.1155/2020/7490840
work_keys_str_mv AT nijianjun animproveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT gongtao animproveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT guyafei animproveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT zhujinxiu animproveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT fanxinnan animproveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT nijianjun improveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT gongtao improveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT guyafei improveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT zhujinxiu improveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot
AT fanxinnan improveddeepresidualnetworkbasedsemanticsimultaneouslocalizationandmappingmethodformonocularvisionrobot