
IDAF: Iterative Dual-Scale Attentional Fusion Network for Automatic Modulation Recognition


Bibliographic Details
Main Authors: Liu, Bohan, Ge, Ruixing, Zhu, Yuxuan, Zhang, Bolin, Zhang, Xiaokai, Bao, Yanfei
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575420/
https://www.ncbi.nlm.nih.gov/pubmed/37836964
http://dx.doi.org/10.3390/s23198134
_version_ 1785120917811101696
author Liu, Bohan
Ge, Ruixing
Zhu, Yuxuan
Zhang, Bolin
Zhang, Xiaokai
Bao, Yanfei
author_sort Liu, Bohan
collection PubMed
description Recently, deep learning models have been widely applied to modulation recognition and have become a hot topic due to their excellent end-to-end learning capabilities. However, current methods are mostly based on uni-modal inputs, which suffer from incomplete information and local optimization. To complement the advantages of different modalities, we focus on multi-modal fusion and introduce an iterative dual-scale attentional fusion (iDAF) method to integrate multi-modal data. First, two feature maps with different receptive field sizes are constructed using local and global embedding layers. Second, the feature maps are fed iteratively into the iterative dual-channel attention module (iDCAM), whose two branches capture the details of high-level features and the global weights of each modal channel, respectively. iDAF not only extracts the recognition characteristics of each specific domain, but also combines the strengths of the different modalities to obtain a richer view. iDAF achieves a recognition accuracy of 93.5% at 10 dB and an average accuracy of 0.6232 over the full signal-to-noise ratio (SNR) range. Comparative experiments and ablation studies demonstrate the effectiveness and superiority of iDAF.
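The fusion scheme sketched in the abstract (a dual-branch gate that mixes a local, per-position view with a global, per-channel view, re-estimated over several iterations) can be illustrated in NumPy. This is a minimal sketch, not the authors' implementation: the names `dual_scale_gate` and `idaf_fuse` are hypothetical, and the identity/average-pooling branches stand in for the paper's learned local and global embedding layers.

```python
import numpy as np

def sigmoid(z):
    """Logistic squashing that turns branch scores into (0, 1) fusion weights."""
    return 1.0 / (1.0 + np.exp(-z))

def dual_scale_gate(u):
    """Toy stand-in for the dual-channel attention module (iDCAM).

    One branch keeps per-position detail (the 'local' scale); the other
    pools a single weight per channel (the 'global' scale).
    """
    local = u                             # local branch: per-position scores
    glob = u.mean(axis=1, keepdims=True)  # global branch: channel-wise average pooling
    return sigmoid(local + glob)          # combined fusion weights in (0, 1)

def idaf_fuse(x_a, x_b, iterations=2):
    """Iteratively re-estimate the gate from the current fused map,
    then blend the two modal feature maps (shape: channels x positions).
    """
    fused = x_a + x_b                     # initial fusion guess
    for _ in range(iterations):
        g = dual_scale_gate(fused)
        fused = g * x_a + (1.0 - g) * x_b  # convex blend of the two modalities
    return fused
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two modal inputs, and feeding the fused map back into the gate is what makes the scheme "iterative".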
format Online
Article
Text
id pubmed-10575420
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-105754202023-10-14 IDAF: Iterative Dual-Scale Attentional Fusion Network for Automatic Modulation Recognition Liu, Bohan Ge, Ruixing Zhu, Yuxuan Zhang, Bolin Zhang, Xiaokai Bao, Yanfei Sensors (Basel) Article Recently, deep learning models have been widely applied to modulation recognition and have become a hot topic due to their excellent end-to-end learning capabilities. However, current methods are mostly based on uni-modal inputs, which suffer from incomplete information and local optimization. To complement the advantages of different modalities, we focus on multi-modal fusion and introduce an iterative dual-scale attentional fusion (iDAF) method to integrate multi-modal data. First, two feature maps with different receptive field sizes are constructed using local and global embedding layers. Second, the feature maps are fed iteratively into the iterative dual-channel attention module (iDCAM), whose two branches capture the details of high-level features and the global weights of each modal channel, respectively. iDAF not only extracts the recognition characteristics of each specific domain, but also combines the strengths of the different modalities to obtain a richer view. iDAF achieves a recognition accuracy of 93.5% at 10 dB and an average accuracy of 0.6232 over the full signal-to-noise ratio (SNR) range. Comparative experiments and ablation studies demonstrate the effectiveness and superiority of iDAF. MDPI 2023-09-28 /pmc/articles/PMC10575420/ /pubmed/37836964 http://dx.doi.org/10.3390/s23198134 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title IDAF: Iterative Dual-Scale Attentional Fusion Network for Automatic Modulation Recognition
topic Article