Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection
Main Authors: | Wu, Dewen; Chen, Ruizhi; Yu, Yue; Zheng, Xingyu; Xu, Yan; Liu, Zuoya |
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9501286/ https://www.ncbi.nlm.nih.gov/pubmed/36144036 http://dx.doi.org/10.3390/mi13091413 |
_version_ | 1784795436813385728 |
author | Wu, Dewen; Chen, Ruizhi; Yu, Yue; Zheng, Xingyu; Xu, Yan; Liu, Zuoya
author_facet | Wu, Dewen; Chen, Ruizhi; Yu, Yue; Zheng, Xingyu; Xu, Yan; Liu, Zuoya
author_sort | Wu, Dewen |
collection | PubMed |
description | Indoor positioning applications are developing at a rapid pace; active visual positioning is one method that is applicable to mobile platforms. Other methods include Wi-Fi, CSI, and PDR approaches; however, their positioning accuracy usually cannot achieve the positioning performance of the active visual method. Active visual users, however, must take a photo to obtain location information, raising confidentiality and privacy issues. To address these concerns, we propose a solution for passive visual positioning based on pedestrian detection and projection transformation. This method consists of three steps: pretreatment, pedestrian detection, and pose estimation. Pretreatment includes camera calibration and camera installation. In pedestrian detection, features are extracted by deep convolutional neural networks using neighboring frame detection results and the map information as the region of interest attention model (RIAM). Pose estimation computes accurate localization results through projection transformation (PT). This system relies on security cameras installed in non-private areas so that pedestrians do not have to take photos. Experiments were conducted in a hall about 100 square meters in size, with 41 test-points for the localization experiment. The results show that the positioning error was 0.48 m (RMSE) and the 90% error was 0.73 m. Therefore, the proposed passive visual method delivers high positioning performance. |
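The description above only names the projection transformation (PT) step at a high level. The Python sketch below shows one common way such a mapping can be realized: a planar homography, estimated from a few floor reference points surveyed during camera installation, maps the bottom-centre of a detected pedestrian bounding box to floor-plan coordinates. The reference coordinates, bounding box values, and helper function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a ground-plane projection step (assumed setup, not the
# paper's code): map a pedestrian's foot point from image pixels to floor
# coordinates with a planar homography.
import numpy as np
import cv2

# Reference points measured once during camera installation: pixel positions
# of floor markers and their known floor-plan positions in metres.
# These values are made up for illustration.
image_pts = np.array([[412, 655], [980, 640], [1040, 905], [300, 930]], dtype=np.float32)
floor_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]], dtype=np.float32)

# Homography from the image plane to the floor plane.
H, _ = cv2.findHomography(image_pts, floor_pts)

def pedestrian_floor_position(bbox, H):
    """Map a pedestrian bounding box (x1, y1, x2, y2) to floor coordinates.

    The bottom-centre of the box is taken as the foot point and is assumed
    to lie on the floor plane.
    """
    x1, y1, x2, y2 = bbox
    foot = np.array([[[(x1 + x2) / 2.0, y2]]], dtype=np.float32)  # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(foot, H)
    return mapped[0, 0]  # (X, Y) in metres on the floor plan

# Example: a detection produced by the CNN stage (hypothetical box).
print(pedestrian_floor_position((620, 410, 700, 760), H))
```

With exactly four hand-picked correspondences the homography is determined directly; adding more surveyed points and letting cv2.findHomography perform a least-squares or RANSAC fit is a common way to reduce sensitivity to measurement error.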
format | Online Article Text |
id | pubmed-9501286 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-95012862022-09-24 Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection Wu, Dewen Chen, Ruizhi Yu, Yue Zheng, Xingyu Xu, Yan Liu, Zuoya Micromachines (Basel) Article Indoor positioning applications are developing at a rapid pace; active visual positioning is one method that is applicable to mobile platforms. Other methods include Wi-Fi, CSI, and PDR approaches; however, their positioning accuracy usually cannot achieve the positioning performance of the active visual method. Active visual users, however, must take a photo to obtain location information, raising confidentiality and privacy issues. To address these concerns, we propose a solution for passive visual positioning based on pedestrian detection and projection transformation. This method consists of three steps: pretreatment, pedestrian detection, and pose estimation. Pretreatment includes camera calibration and camera installation. In pedestrian detection, features are extracted by deep convolutional neural networks using neighboring frame detection results and the map information as the region of interest attention model (RIAM). Pose estimation computes accurate localization results through projection transformation (PT). This system relies on security cameras installed in non-private areas so that pedestrians do not have to take photos. Experiments were conducted in a hall about 100 square meters in size, with 41 test-points for the localization experiment. The results show that the positioning error was 0.48 m (RMSE) and the 90% error was 0.73 m. Therefore, the proposed passive visual method delivers high positioning performance. MDPI 2022-08-27 /pmc/articles/PMC9501286/ /pubmed/36144036 http://dx.doi.org/10.3390/mi13091413 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wu, Dewen Chen, Ruizhi Yu, Yue Zheng, Xingyu Xu, Yan Liu, Zuoya Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title | Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title_full | Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title_fullStr | Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title_full_unstemmed | Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title_short | Indoor Passive Visual Positioning by CNN-Based Pedestrian Detection |
title_sort | indoor passive visual positioning by cnn-based pedestrian detection |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9501286/ https://www.ncbi.nlm.nih.gov/pubmed/36144036 http://dx.doi.org/10.3390/mi13091413 |
work_keys_str_mv | AT wudewen indoorpassivevisualpositioningbycnnbasedpedestriandetection AT chenruizhi indoorpassivevisualpositioningbycnnbasedpedestriandetection AT yuyue indoorpassivevisualpositioningbycnnbasedpedestriandetection AT zhengxingyu indoorpassivevisualpositioningbycnnbasedpedestriandetection AT xuyan indoorpassivevisualpositioningbycnnbasedpedestriandetection AT liuzuoya indoorpassivevisualpositioningbycnnbasedpedestriandetection |