Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module
Non-Line-of-Sight (NLOS) propagation of Ultra-Wideband (UWB) signals reduces the reliability and accuracy of positioning. It is therefore essential to identify the channel environment before localization, so that high-accuracy Line-of-Sight (LOS) ranging results are preserved and positively biased NLOS ranging results are corrected or rejected.
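The record's full abstract (in the description field below) outlines the proposed architecture: a Channel Attention Module (CAM) embedded in a multilayer CNN that extracts time-domain features from the raw CIR, with a global average pooling (GAP) layer replacing the fully connected classifier. The following PyTorch sketch is only a minimal illustration of that general design under stated assumptions: the layer counts, channel widths, kernel sizes, the squeeze-and-excitation-style attention block, and the 157-sample CIR length are placeholders, not the configuration published in the article.

```python
# Minimal, illustrative sketch of a 1D CNN with a channel attention module (CAM) and a
# global-average-pooling (GAP) classification head for NLOS/LOS identification from raw
# time-domain CIR samples. All hyperparameters here are assumptions made for the example.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: learn a weight per feature channel."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length); squeeze over the time axis, then rescale channels.
        weights = self.fc(x.mean(dim=2))
        return x * weights.unsqueeze(2)


class CnnCam(nn.Module):
    """Multilayer 1D CNN with embedded channel attention and a GAP head (no FC classifier)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.BatchNorm1d(16), nn.ReLU(),
            ChannelAttention(16),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.BatchNorm1d(32), nn.ReLU(),
            ChannelAttention(32),
            nn.MaxPool1d(2),
            nn.Conv1d(32, num_classes, kernel_size=3, padding=1),  # one feature map per class
        )
        self.gap = nn.AdaptiveAvgPool1d(1)  # GAP replaces the fully connected layer

    def forward(self, cir: torch.Tensor) -> torch.Tensor:
        # cir: (batch, 1, cir_length) raw time-domain channel impulse response
        return self.gap(self.features(cir)).squeeze(-1)  # (batch, num_classes) logits


if __name__ == "__main__":
    model = CnnCam()
    dummy_cir = torch.randn(8, 1, 157)  # 157-sample CIR window is an assumed length
    print(model(dummy_cir).shape)       # expected: torch.Size([8, 2]) -> LOS/NLOS logits
```

Ending the feature extractor with a convolution that outputs one channel per class and then applying GAP yields class logits directly, which is the usual way a GAP head substitutes for a fully connected layer.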
Main Authors: | Zhang, Jingjing; Yi, Qingwu; Huang, Lu; Yang, Zihan; Cheng, Jianqiang; Zhang, Heng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611321/ https://www.ncbi.nlm.nih.gov/pubmed/37896642 http://dx.doi.org/10.3390/s23208552 |
_version_ | 1785128463868362752 |
---|---|
author | Zhang, Jingjing; Yi, Qingwu; Huang, Lu; Yang, Zihan; Cheng, Jianqiang; Zhang, Heng |
author_facet | Zhang, Jingjing; Yi, Qingwu; Huang, Lu; Yang, Zihan; Cheng, Jianqiang; Zhang, Heng |
author_sort | Zhang, Jingjing |
collection | PubMed |
description | Non-Line-of-Sight (NLOS) propagation of Ultra-Wideband (UWB) signals reduces the reliability and accuracy of positioning. It is therefore essential to identify the channel environment before localization, so that high-accuracy Line-of-Sight (LOS) ranging results are preserved and positively biased NLOS ranging results are corrected or rejected. To address the low accuracy and poor generalization of existing NLOS/LOS identification methods based on the Channel Impulse Response (CIR), an NLOS/LOS identification method combining a multilayer Convolutional Neural Network (CNN) with a Channel Attention Module (CAM) is proposed. First, the CAM is embedded in the multilayer CNN to extract time-domain features from the raw CIR. Then, a global average pooling layer replaces the fully connected layer for feature integration and classification output. Comparative experiments against models with different structures and against other identification methods are carried out on the public dataset from the European Horizon 2020 Programme project eWINE. The results show that the proposed CNN-CAM model achieves a LOS recall of 92.29%, an NLOS recall of 87.71%, an accuracy of 90.00%, and an F1-score of 90.22%, outperforming current state-of-the-art methods (the metric definitions are illustrated in the sketch following this record). |
format | Online Article Text |
id | pubmed-10611321 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10611321 2023-10-28 Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module Zhang, Jingjing; Yi, Qingwu; Huang, Lu; Yang, Zihan; Cheng, Jianqiang; Zhang, Heng Sensors (Basel) Article Non-Line-of-Sight (NLOS) propagation of Ultra-Wideband (UWB) signals reduces the reliability and accuracy of positioning. It is therefore essential to identify the channel environment before localization, so that high-accuracy Line-of-Sight (LOS) ranging results are preserved and positively biased NLOS ranging results are corrected or rejected. To address the low accuracy and poor generalization of existing NLOS/LOS identification methods based on the Channel Impulse Response (CIR), an NLOS/LOS identification method combining a multilayer Convolutional Neural Network (CNN) with a Channel Attention Module (CAM) is proposed. First, the CAM is embedded in the multilayer CNN to extract time-domain features from the raw CIR. Then, a global average pooling layer replaces the fully connected layer for feature integration and classification output. Comparative experiments against models with different structures and against other identification methods are carried out on the public dataset from the European Horizon 2020 Programme project eWINE. The results show that the proposed CNN-CAM model achieves a LOS recall of 92.29%, an NLOS recall of 87.71%, an accuracy of 90.00%, and an F1-score of 90.22%, outperforming current state-of-the-art methods. MDPI 2023-10-18 /pmc/articles/PMC10611321/ /pubmed/37896642 http://dx.doi.org/10.3390/s23208552 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Zhang, Jingjing; Yi, Qingwu; Huang, Lu; Yang, Zihan; Cheng, Jianqiang; Zhang, Heng; Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title | Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title_full | Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title_fullStr | Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title_full_unstemmed | Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title_short | Research on None-Line-of-Sight/Line-of-Sight Identification Method Based on Convolutional Neural Network-Channel Attention Module |
title_sort | research on none-line-of-sight/line-of-sight identification method based on convolutional neural network-channel attention module |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611321/ https://www.ncbi.nlm.nih.gov/pubmed/37896642 http://dx.doi.org/10.3390/s23208552 |
work_keys_str_mv | AT zhangjingjing researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule AT yiqingwu researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule AT huanglu researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule AT yangzihan researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule AT chengjianqiang researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule AT zhangheng researchonnonelineofsightlineofsightidentificationmethodbasedonconvolutionalneuralnetworkchannelattentionmodule |
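For context on the figures quoted in the record's abstract, LOS recall, NLOS recall, accuracy, and F1-score are the standard binary-classification metrics computed from a confusion matrix with LOS treated as the positive class. The short sketch below shows those definitions; the counts are hypothetical, chosen only so that an assumed balanced 2000-sample test set reproduces percentages close to the ones quoted above, and are not taken from the article.

```python
# Metric definitions for a binary LOS/NLOS classifier, with LOS as the positive class.
# The example counts are hypothetical and only approximately consistent with the
# percentages reported in the record's abstract under an assumed balanced test set.
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fn: true LOS predicted LOS/NLOS; tn/fp: true NLOS predicted NLOS/LOS."""
    los_recall = tp / (tp + fn)                 # sensitivity for the LOS class
    nlos_recall = tn / (tn + fp)                # sensitivity for the NLOS class
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * los_recall / (precision + los_recall)
    return {"LOS recall": los_recall, "NLOS recall": nlos_recall,
            "accuracy": accuracy, "F1": f1}


if __name__ == "__main__":
    # Hypothetical counts for a balanced 1000 LOS / 1000 NLOS test set.
    print(binary_metrics(tp=923, fp=123, fn=77, tn=877))
```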