Multi-Objective Location and Mapping Based on Deep Learning and Visual Slam
Simultaneous localization and mapping (SLAM) technology can be used to locate and build maps in unknown environments, but the constructed maps often suffer from poor readability and interactivity, and the primary and secondary information in the map cannot be accurately grasped. For intelligent robo...
Main Authors: Sun, Ying; Hu, Jun; Yun, Juntong; Liu, Ying; Bai, Dongxu; Liu, Xin; Zhao, Guojun; Jiang, Guozhang; Kong, Jianyi; Chen, Baojia
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9571389/ https://www.ncbi.nlm.nih.gov/pubmed/36236676 http://dx.doi.org/10.3390/s22197576
_version_ | 1784810352298426368 |
author | Sun, Ying; Hu, Jun; Yun, Juntong; Liu, Ying; Bai, Dongxu; Liu, Xin; Zhao, Guojun; Jiang, Guozhang; Kong, Jianyi; Chen, Baojia
author_sort | Sun, Ying |
collection | PubMed |
description | Simultaneous localization and mapping (SLAM) technology can be used to locate and build maps in unknown environments, but the constructed maps often suffer from poor readability and interactivity, and the primary and secondary information in the map cannot be accurately grasped. For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. Our proposed method not only reduces the absolute positional error (APE) and improves the positioning performance of the system, but also constructs an object-oriented dense semantic point cloud map and outputs a point cloud model of each object, reconstructing each object in the indoor scene. In our experiments, eight categories of objects are detected and semantically mapped using COCO weights, and in principle most objects in the actual scene can be reconstructed. Experiments show that the number of points in the point cloud is significantly reduced. The average positioning error of the eight object categories on the Technical University of Munich (TUM) datasets is very small. The absolute positional error of the camera is also reduced with the introduction of semantic constraints, and the positioning performance of the system is improved. At the same time, our algorithm can segment the point cloud models of objects in the environment with high accuracy.
format | Online Article Text |
id | pubmed-9571389 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9571389 2022-10-17 Multi-Objective Location and Mapping Based on Deep Learning and Visual Slam. Sensors (Basel), Article. MDPI 2022-10-06 /pmc/articles/PMC9571389/ /pubmed/36236676 http://dx.doi.org/10.3390/s22197576 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title | Multi-Objective Location and Mapping Based on Deep Learning and Visual Slam |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9571389/ https://www.ncbi.nlm.nih.gov/pubmed/36236676 http://dx.doi.org/10.3390/s22197576 |
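Note on the evaluation metric mentioned in the abstract: the absolute positional error (APE) is the standard trajectory-level figure used on the TUM RGB-D benchmark. The sketch below is background only and is not the authors' implementation; the function names, the NumPy-based Horn/Umeyama alignment, and the synthetic trajectories are illustrative assumptions. It rigidly aligns the estimated camera positions to the ground truth and reports the translational RMSE.

```python
# Background sketch: how absolute positional error (APE) is commonly computed for a
# SLAM trajectory on benchmarks such as TUM RGB-D. Not the authors' implementation;
# function names and the synthetic example trajectories are illustrative assumptions.
import numpy as np

def align_rigid(est, gt):
    """Best-fit rotation R and translation t so that R @ est + t ~= gt (Horn/Umeyama).
    est, gt: 3 x N arrays of time-associated camera positions."""
    mu_e = est.mean(axis=1, keepdims=True)
    mu_g = gt.mean(axis=1, keepdims=True)
    H = (est - mu_e) @ (gt - mu_g).T                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def absolute_positional_error(est, gt):
    """Translational RMSE after rigid alignment (the usual APE figure, in meters)."""
    R, t = align_rigid(est, gt)
    diff = (R @ est + t) - gt
    return float(np.sqrt((diff ** 2).sum(axis=0).mean()))

if __name__ == "__main__":
    # Synthetic example: a rotated, shifted, noisy copy of a random-walk ground truth.
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(scale=0.01, size=(3, 500)), axis=1)
    theta = 0.3
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    est = Rz @ gt + rng.normal(scale=0.005, size=(3, 500)) + np.array([[1.0], [0.5], [0.0]])
    print(f"APE (RMSE): {absolute_positional_error(est, gt):.4f} m")
```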