
Multi-Floor Indoor Localization Based on Multi-Modal Sensors

High-precision indoor localization is advancing rapidly, especially in multi-floor scenarios. Existing indoor positioning schemes mainly rely on wireless, visual, or lidar data and are therefore limited to a single sensor. With the massive deployment of WiFi access points and low-...

Full description

Bibliographic Details
Main Authors: Zhou, Guangbing, Xu, Shugong, Zhang, Shunqing, Wang, Yu, Xiang, Chenlu
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9185272/
https://www.ncbi.nlm.nih.gov/pubmed/35684784
http://dx.doi.org/10.3390/s22114162
_version_ 1784724683382325248
author Zhou, Guangbing
Xu, Shugong
Zhang, Shunqing
Wang, Yu
Xiang, Chenlu
author_facet Zhou, Guangbing
Xu, Shugong
Zhang, Shunqing
Wang, Yu
Xiang, Chenlu
author_sort Zhou, Guangbing
collection PubMed
description High-precision indoor localization is advancing rapidly, especially in multi-floor scenarios. Existing indoor positioning schemes mainly rely on wireless, visual, or lidar data and are therefore limited to a single sensor. With the massive deployment of WiFi access points and low-cost cameras, it is possible to combine these three methods to achieve more accurate, complete, and reliable localization results. However, existing hybrid visual and wireless approaches simply benefit from the rapid advances in SLAM without exploring the interactions between the modalities. In this paper, a high-precision multi-floor indoor positioning method based on vision, wireless signal characteristics, and lidar is proposed. In the joint scheme, we first use the positioning output of lidar SLAM as the theoretical reference position for the visual images; then use the WiFi signal to estimate a rough area via likelihood probabilities; and finally use the visual image to fine-tune the floor estimation and location results. Numerical results show that the proposed joint localization scheme achieves an average 3D localization accuracy of 0.62 m, a 1.24 m MSE for two-dimensional tracking trajectories, and a floor-estimation accuracy of 89.22%. Meanwhile, the localization process takes less than 0.25 s, which is of great importance for practical implementation.
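The abstract outlines a coarse-to-fine pipeline: a WiFi likelihood model first narrows the search to a rough floor and area, and image retrieval against a database whose poses were labeled offline by lidar SLAM then refines the 3D position. The minimal Python sketch below illustrates that flow only; the Gaussian RSSI likelihood, the fingerprint and image-database layout, and all names (wifi_rough_area, visual_fine_tune, descriptor, pose) are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical coarse-to-fine localization sketch based on the abstract.
# fingerprints: {(floor, cell): mean RSSI vector over the deployed APs}
# image_db: list of dicts with "floor", "cell", "descriptor", and "pose",
# where "pose" is the 3D reference position labeled offline by lidar SLAM.

def wifi_rough_area(rssi, fingerprints, sigma=4.0):
    """Pick the most likely (floor, cell) under an assumed Gaussian RSSI model."""
    scores = {}
    for (floor, cell), mean_rssi in fingerprints.items():
        diff = rssi - mean_rssi
        scores[(floor, cell)] = float(np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def visual_fine_tune(query_descriptor, image_db, floor, cell):
    """Refine the rough area to a 3D pose by nearest-descriptor image retrieval."""
    # Searching only the WiFi-selected area keeps the candidate set small,
    # one plausible way to stay within the sub-0.25 s budget the paper reports.
    candidates = [e for e in image_db if e["floor"] == floor and e["cell"] == cell]
    best = min(candidates,
               key=lambda e: np.linalg.norm(e["descriptor"] - query_descriptor))
    return best["pose"]

# Example usage with data prepared as described above:
#   floor, cell = wifi_rough_area(observed_rssi, fingerprints)
#   position = visual_fine_tune(query_descriptor, image_db, floor, cell)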
format Online
Article
Text
id pubmed-9185272
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9185272 2022-06-11 Multi-Floor Indoor Localization Based on Multi-Modal Sensors Zhou, Guangbing Xu, Shugong Zhang, Shunqing Wang, Yu Xiang, Chenlu Sensors (Basel) Article High-precision indoor localization is advancing rapidly, especially in multi-floor scenarios. Existing indoor positioning schemes mainly rely on wireless, visual, or lidar data and are therefore limited to a single sensor. With the massive deployment of WiFi access points and low-cost cameras, it is possible to combine these three methods to achieve more accurate, complete, and reliable localization results. However, existing hybrid visual and wireless approaches simply benefit from the rapid advances in SLAM without exploring the interactions between the modalities. In this paper, a high-precision multi-floor indoor positioning method based on vision, wireless signal characteristics, and lidar is proposed. In the joint scheme, we first use the positioning output of lidar SLAM as the theoretical reference position for the visual images; then use the WiFi signal to estimate a rough area via likelihood probabilities; and finally use the visual image to fine-tune the floor estimation and location results. Numerical results show that the proposed joint localization scheme achieves an average 3D localization accuracy of 0.62 m, a 1.24 m MSE for two-dimensional tracking trajectories, and a floor-estimation accuracy of 89.22%. Meanwhile, the localization process takes less than 0.25 s, which is of great importance for practical implementation. MDPI 2022-05-30 /pmc/articles/PMC9185272/ /pubmed/35684784 http://dx.doi.org/10.3390/s22114162 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhou, Guangbing
Xu, Shugong
Zhang, Shunqing
Wang, Yu
Xiang, Chenlu
Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title_full Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title_fullStr Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title_full_unstemmed Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title_short Multi-Floor Indoor Localization Based on Multi-Modal Sensors
title_sort multi-floor indoor localization based on multi-modal sensors
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9185272/
https://www.ncbi.nlm.nih.gov/pubmed/35684784
http://dx.doi.org/10.3390/s22114162
work_keys_str_mv AT zhouguangbing multifloorindoorlocalizationbasedonmultimodalsensors
AT xushugong multifloorindoorlocalizationbasedonmultimodalsensors
AT zhangshunqing multifloorindoorlocalizationbasedonmultimodalsensors
AT wangyu multifloorindoorlocalizationbasedonmultimodalsensors
AT xiangchenlu multifloorindoorlocalizationbasedonmultimodalsensors