A lightweight hybrid vision transformer network for radar-based human activity recognition

Bibliographic Details
Main Authors: Huan, Sha, Wang, Zhaoyue, Wang, Xiaoqiang, Wu, Limei, Yang, Xiaoxuan, Huang, Hongming, Dai, Gan E.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590397/
https://www.ncbi.nlm.nih.gov/pubmed/37865672
http://dx.doi.org/10.1038/s41598-023-45149-5
_version_ 1785123979565989888
author Huan, Sha
Wang, Zhaoyue
Wang, Xiaoqiang
Wu, Limei
Yang, Xiaoxuan
Huang, Hongming
Dai, Gan E.
author_facet Huan, Sha
Wang, Zhaoyue
Wang, Xiaoqiang
Wu, Limei
Yang, Xiaoxuan
Huang, Hongming
Dai, Gan E.
author_sort Huan, Sha
collection PubMed
description Radar-based human activity recognition (HAR) offers a non-contact technique with privacy protection and lighting robustness for many advanced applications. Complex deep neural networks demonstrate significant performance advantages when classifying radar micro-Doppler signals, which have unique correspondences with human behavior. However, in embedded applications, the demand for lightweight models and low latency poses challenges for radar-based HAR network construction. In this paper, an efficient network based on a lightweight hybrid Vision Transformer (LH-ViT) is proposed to achieve high HAR accuracy and a lightweight network simultaneously. The network combines efficient convolution operations with the strength of the self-attention mechanism in ViT. A feature pyramid architecture is applied for multi-scale feature extraction from the micro-Doppler map. Feature enhancement is subsequently performed by the stacked Radar-ViT, in which fold and unfold operations are added to lower the computational load of the attention mechanism. The convolution operator in the LH-ViT is replaced by the RES-SE block, an efficient structure that combines the residual learning framework with the Squeeze-and-Excitation network. Experiments on two human activity datasets demonstrate our method’s advantages over traditional methods in terms of expressiveness and computational efficiency.
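The abstract above names two building blocks, a residual convolution enhanced by Squeeze-and-Excitation (the "RES-SE block") and an attention step wrapped in fold/unfold operations, that a short sketch can make concrete. The PyTorch sketch below is illustrative only and is not the authors' implementation: the names SqueezeExcite, ResSEBlock and folded_attention, the patch size, channel counts, and reduction ratio are assumptions, and the fold/unfold step is just one plausible way to shorten the attention sequence in the spirit the abstract describes.

```python
# Illustrative sketch (not the authors' code), assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SqueezeExcite(nn.Module):
    """Squeeze-and-Excitation: global average pool -> bottleneck MLP -> channel gates."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc1 = nn.Linear(channels, hidden)
        self.fc2 = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                               # squeeze spatial dims: (B, C)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))     # excite: per-channel gates in (0, 1)
        return x * s[:, :, None, None]                       # re-scale feature maps


class ResSEBlock(nn.Module):
    """Residual convolution block gated by an SE module (one guess at the 'RES-SE block')."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.se = SqueezeExcite(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.se(self.bn2(self.conv2(out)))
        return F.relu(out + x)                               # identity shortcut (residual learning)


def folded_attention(x: torch.Tensor, attn: nn.MultiheadAttention, patch: int = 2) -> torch.Tensor:
    """Unfold the feature map into non-overlapping patches, self-attend over the patch
    tokens, then fold back; sequence length drops from H*W to (H/patch)*(W/patch)."""
    B, C, H, W = x.shape
    cols = F.unfold(x, kernel_size=patch, stride=patch)       # (B, C*p*p, L)
    tokens = cols.transpose(1, 2)                             # (B, L, C*p*p)
    out, _ = attn(tokens, tokens, tokens)                     # attn built with batch_first=True
    return F.fold(out.transpose(1, 2), (H, W), kernel_size=patch, stride=patch)
```

A quick shape check under the same assumptions (a 32-channel 64x64 micro-Doppler feature map, patch size 2, so the attention operates over 1024 tokens instead of 4096):

```python
x = torch.randn(1, 32, 64, 64)                                # (B, C, H, W)
block = ResSEBlock(32)
attn = nn.MultiheadAttention(embed_dim=32 * 2 * 2, num_heads=4, batch_first=True)
y = folded_attention(block(x), attn, patch=2)                 # shape preserved: (1, 32, 64, 64)
```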
format Online
Article
Text
id pubmed-10590397
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-10590397 2023-10-23 A lightweight hybrid vision transformer network for radar-based human activity recognition Huan, Sha Wang, Zhaoyue Wang, Xiaoqiang Wu, Limei Yang, Xiaoxuan Huang, Hongming Dai, Gan E. Sci Rep Article Radar-based human activity recognition (HAR) offers a non-contact technique with privacy protection and lighting robustness for many advanced applications. Complex deep neural networks demonstrate significant performance advantages when classifying radar micro-Doppler signals, which have unique correspondences with human behavior. However, in embedded applications, the demand for lightweight models and low latency poses challenges for radar-based HAR network construction. In this paper, an efficient network based on a lightweight hybrid Vision Transformer (LH-ViT) is proposed to achieve high HAR accuracy and a lightweight network simultaneously. The network combines efficient convolution operations with the strength of the self-attention mechanism in ViT. A feature pyramid architecture is applied for multi-scale feature extraction from the micro-Doppler map. Feature enhancement is subsequently performed by the stacked Radar-ViT, in which fold and unfold operations are added to lower the computational load of the attention mechanism. The convolution operator in the LH-ViT is replaced by the RES-SE block, an efficient structure that combines the residual learning framework with the Squeeze-and-Excitation network. Experiments on two human activity datasets demonstrate our method’s advantages over traditional methods in terms of expressiveness and computational efficiency. Nature Publishing Group UK 2023-10-21 /pmc/articles/PMC10590397/ /pubmed/37865672 http://dx.doi.org/10.1038/s41598-023-45149-5 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Huan, Sha
Wang, Zhaoyue
Wang, Xiaoqiang
Wu, Limei
Yang, Xiaoxuan
Huang, Hongming
Dai, Gan E.
A lightweight hybrid vision transformer network for radar-based human activity recognition
title A lightweight hybrid vision transformer network for radar-based human activity recognition
title_full A lightweight hybrid vision transformer network for radar-based human activity recognition
title_fullStr A lightweight hybrid vision transformer network for radar-based human activity recognition
title_full_unstemmed A lightweight hybrid vision transformer network for radar-based human activity recognition
title_short A lightweight hybrid vision transformer network for radar-based human activity recognition
title_sort lightweight hybrid vision transformer network for radar-based human activity recognition
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590397/
https://www.ncbi.nlm.nih.gov/pubmed/37865672
http://dx.doi.org/10.1038/s41598-023-45149-5
work_keys_str_mv AT huansha alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wangzhaoyue alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wangxiaoqiang alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wulimei alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT yangxiaoxuan alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT huanghongming alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT daigane alightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT huansha lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wangzhaoyue lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wangxiaoqiang lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT wulimei lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT yangxiaoxuan lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT huanghongming lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition
AT daigane lightweighthybridvisiontransformernetworkforradarbasedhumanactivityrecognition