Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach
After decades of research, there is still no indoor localization solution comparable to the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons are the complex spatial topology and RF transmission environment of indoor spaces. To deal with these problems, an...
Main Authors: | Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2017 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5751567/ https://www.ncbi.nlm.nih.gov/pubmed/29292761 http://dx.doi.org/10.3390/s17122847 |
_version_ | 1783289973545369600 |
---|---|
author | Liu, Mengyun Chen, Ruizhi Li, Deren Chen, Yujin Guo, Guangyi Cao, Zhipeng Pan, Yuanjin |
author_facet | Liu, Mengyun Chen, Ruizhi Li, Deren Chen, Yujin Guo, Guangyi Cao, Zhipeng Pan, Yuanjin |
author_sort | Liu, Mengyun |
collection | PubMed |
description | After decades of research, there is still no indoor localization solution comparable to the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons are the complex spatial topology and RF transmission environment of indoor spaces. To deal with these problems, this paper proposes an indoor scene-constrained localization method, inspired by the visual cognition ability of the human brain and by progress in high-level image understanding in the computer vision field. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone using its camera, WiFi and inertial sensors. In contrast to earlier research, the smartphone camera is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by the scene information determines the final location. For indoor scene recognition, we take advantage of deep learning, which has proven highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of the particles. As in other fingerprinting localization methods, the proposed system has two stages: offline training and online localization. In the offline stage, an indoor scene model is trained with Caffe (one of the most popular open-source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, the smartphone camera is used to recognize the initial scene, and a particle filter algorithm then fuses the sensor data to determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server were implemented. The Android client collects data and locates the user;
the web server performs indoor scene model training and communicates with the Android client. To evaluate the performance, comparison experiments were conducted, and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without the scene constraint, including commercial products such as IndoorAtlas. |
format | Online Article Text |
id | pubmed-5751567 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-5751567 2018-01-10 Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach Liu, Mengyun Chen, Ruizhi Li, Deren Chen, Yujin Guo, Guangyi Cao, Zhipeng Pan, Yuanjin Sensors (Basel) Article MDPI 2017-12-08 /pmc/articles/PMC5751567/ /pubmed/29292761 http://dx.doi.org/10.3390/s17122847 Text en © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Liu, Mengyun Chen, Ruizhi Li, Deren Chen, Yujin Guo, Guangyi Cao, Zhipeng Pan, Yuanjin Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title | Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title_full | Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title_fullStr | Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title_full_unstemmed | Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title_short | Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach |
title_sort | scene recognition for indoor localization using a multi-sensor fusion approach |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5751567/ https://www.ncbi.nlm.nih.gov/pubmed/29292761 http://dx.doi.org/10.3390/s17122847 |
work_keys_str_mv | AT liumengyun scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT chenruizhi scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT lideren scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT chenyujin scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT guoguangyi scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT caozhipeng scenerecognitionforindoorlocalizationusingamultisensorfusionapproach AT panyuanjin scenerecognitionforindoorlocalizationusingamultisensorfusionapproach |
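The scene-constrained particle filter described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the `FINGERPRINT` grid, the Gaussian measurement model, and the noise parameters `sigma_wifi` / `sigma_mag` are hypothetical stand-ins for the offline fingerprint database and measurement models the paper builds. The scene constraint enters by zeroing the weight of any particle outside the scene recognized by the camera.

```python
import math

# Hypothetical fingerprint database: each grid cell (x, y) maps to the
# expected WiFi RSS (dBm) and magnetic field magnitude (uT) surveyed offline.
FINGERPRINT = {
    (x, y): (-50.0 - 2.0 * (x + y), 45.0 + 0.5 * x)
    for x in range(10) for y in range(10)
}

def gaussian_likelihood(observed, expected, sigma):
    """Likelihood of an observation under a Gaussian measurement model."""
    return math.exp(-((observed - expected) ** 2) / (2.0 * sigma ** 2))

def update_weights(particles, wifi_obs, mag_obs, scene_cells,
                   sigma_wifi=4.0, sigma_mag=1.0):
    """One particle-filter measurement update: weight each particle by the
    product of its WiFi and magnetic likelihoods, and apply the scene
    constraint by zeroing particles outside the recognized scene."""
    weights = []
    for (x, y) in particles:
        cell = (int(x), int(y))
        if cell not in scene_cells or cell not in FINGERPRINT:
            weights.append(0.0)  # scene constraint: impossible location
            continue
        rss, mag = FINGERPRINT[cell]
        weights.append(gaussian_likelihood(wifi_obs, rss, sigma_wifi) *
                       gaussian_likelihood(mag_obs, mag, sigma_mag))
    total = sum(weights) or 1.0  # avoid division by zero if all weights vanish
    return [w / total for w in weights]

def estimate(particles, weights):
    """Weighted-mean position estimate from the normalized weights."""
    return (sum(w * p[0] for p, w in zip(particles, weights)),
            sum(w * p[1] for p, w in zip(particles, weights)))
```

A full filter would also include the motion-update step driven by the inertial sensors (step detection and heading) and a resampling step, both omitted here for brevity.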