Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging
Main Authors: | Liu, Guisong; Wei, Guoliang; Sun, Shuqing; Mao, Dandan; Zhang, Jiansong; Zhao, Dechun; Tian, Xuelong; Wang, Xing; Chen, Nanxi |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10416229/ https://www.ncbi.nlm.nih.gov/pubmed/37575302 http://dx.doi.org/10.3389/fnins.2023.1218072 |
_version_ | 1785087726031208448 |
author | Liu, Guisong; Wei, Guoliang; Sun, Shuqing; Mao, Dandan; Zhang, Jiansong; Zhao, Dechun; Tian, Xuelong; Wang, Xing; Chen, Nanxi |
author_facet | Liu, Guisong; Wei, Guoliang; Sun, Shuqing; Mao, Dandan; Zhang, Jiansong; Zhao, Dechun; Tian, Xuelong; Wang, Xing; Chen, Nanxi |
author_sort | Liu, Guisong |
collection | PubMed |
description | A real-time sleep staging algorithm that can run inference on mobile devices without excessive burden is a prerequisite for closed-loop sleep modulation. However, current deep learning sleep staging models have poor real-time efficiency and redundant parameters. We propose a lightweight, high-performance sleep staging model named Micro SleepNet, which takes a 30-s electroencephalography (EEG) epoch as input, without relying on contextual signals. The model features a one-dimensional group convolution with a kernel size of 1 × 3 and an Efficient Channel and Spatial Attention (ECSA) module for feature extraction and adaptive recalibration. Moreover, the model efficiently performs feature fusion using a dilated convolution module and replaces the conventional fully connected layer with Global Average Pooling (GAP). These design choices reduce the total number of model parameters to 48,226, requiring only approximately 48.95 million floating-point operations (MFLOPs) per inference. We conducted subject-independent cross-validation of the proposed model on three publicly available datasets, achieving an overall accuracy of up to 83.3% and a Cohen's Kappa of 0.77. Additionally, we introduce Class Activation Mapping (CAM) to visualize the model's attention to EEG waveforms, which demonstrates the model's ability to accurately capture characteristic EEG waveforms at different sleep stages. This provides a strong interpretability foundation for practical applications. Furthermore, the Micro SleepNet model occupies approximately 100 KB of memory on an Android smartphone and takes only 2.8 ms to infer one EEG epoch, meeting the real-time requirements of sleep staging tasks on mobile devices. Consequently, our proposed model has the potential to serve as a foundation for accurate closed-loop sleep modulation. |
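The description credits the model's small footprint to 1-D group convolutions and a GAP head in place of a fully connected classifier. As a minimal sketch of why those choices shrink the parameter count (the layer sizes below are hypothetical, not taken from the paper):

```python
import numpy as np

def conv1d_params(in_ch, out_ch, kernel, groups=1, bias=True):
    """Weight count of a 1-D convolution: (in_ch/groups) * kernel * out_ch (+ biases).
    Grouping splits channels into independent subsets, dividing the weight count."""
    assert in_ch % groups == 0 and out_ch % groups == 0
    return (in_ch // groups) * kernel * out_ch + (out_ch if bias else 0)

# With the paper's 1 x 3 kernel, grouping cuts weights roughly by the group count
# (hypothetical 64-channel layer, 8 groups):
standard = conv1d_params(64, 64, kernel=3, groups=1)  # 64*3*64 + 64 = 12352
grouped  = conv1d_params(64, 64, kernel=3, groups=8)  # 8*3*64  + 64 = 1600

def gap_head(features):
    """Global Average Pooling: average each channel over time, so the classifier
    only needs channels x classes weights instead of a large flattened FC layer."""
    return features.mean(axis=-1)

x = np.random.randn(64, 3000)   # 64 feature channels over a 30-s epoch (assumed 100 Hz)
pooled = gap_head(x)            # shape (64,): one summary value per channel
```

The same arithmetic explains the reported totals: stacking a few such grouped layers plus a GAP head stays in the tens of thousands of parameters, whereas a single flattened FC layer over the raw feature map would dominate the budget.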
format | Online Article Text |
id | pubmed-10416229 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10416229 2023-08-12 Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging Liu, Guisong; Wei, Guoliang; Sun, Shuqing; Mao, Dandan; Zhang, Jiansong; Zhao, Dechun; Tian, Xuelong; Wang, Xing; Chen, Nanxi Front Neurosci Neuroscience A real-time sleep staging algorithm that can run inference on mobile devices without excessive burden is a prerequisite for closed-loop sleep modulation. However, current deep learning sleep staging models have poor real-time efficiency and redundant parameters. We propose a lightweight, high-performance sleep staging model named Micro SleepNet, which takes a 30-s electroencephalography (EEG) epoch as input, without relying on contextual signals. The model features a one-dimensional group convolution with a kernel size of 1 × 3 and an Efficient Channel and Spatial Attention (ECSA) module for feature extraction and adaptive recalibration. Moreover, the model efficiently performs feature fusion using a dilated convolution module and replaces the conventional fully connected layer with Global Average Pooling (GAP). These design choices reduce the total number of model parameters to 48,226, requiring only approximately 48.95 million floating-point operations (MFLOPs) per inference. We conducted subject-independent cross-validation of the proposed model on three publicly available datasets, achieving an overall accuracy of up to 83.3% and a Cohen's Kappa of 0.77. Additionally, we introduce Class Activation Mapping (CAM) to visualize the model's attention to EEG waveforms, which demonstrates the model's ability to accurately capture characteristic EEG waveforms at different sleep stages. This provides a strong interpretability foundation for practical applications. Furthermore, the Micro SleepNet model occupies approximately 100 KB of memory on an Android smartphone and takes only 2.8 ms to infer one EEG epoch, meeting the real-time requirements of sleep staging tasks on mobile devices. Consequently, our proposed model has the potential to serve as a foundation for accurate closed-loop sleep modulation. Frontiers Media S.A. 2023-07-28 /pmc/articles/PMC10416229/ /pubmed/37575302 http://dx.doi.org/10.3389/fnins.2023.1218072 Text en Copyright © 2023 Liu, Wei, Sun, Mao, Zhang, Zhao, Tian, Wang and Chen. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Liu, Guisong Wei, Guoliang Sun, Shuqing Mao, Dandan Zhang, Jiansong Zhao, Dechun Tian, Xuelong Wang, Xing Chen, Nanxi Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title | Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title_full | Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title_fullStr | Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title_full_unstemmed | Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title_short | Micro SleepNet: efficient deep learning model for mobile terminal real-time sleep staging |
title_sort | micro sleepnet: efficient deep learning model for mobile terminal real-time sleep staging |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10416229/ https://www.ncbi.nlm.nih.gov/pubmed/37575302 http://dx.doi.org/10.3389/fnins.2023.1218072 |
work_keys_str_mv | AT liuguisong microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT weiguoliang microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT sunshuqing microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT maodandan microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT zhangjiansong microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT zhaodechun microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT tianxuelong microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT wangxing microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging AT chennanxi microsleepnetefficientdeeplearningmodelformobileterminalrealtimesleepstaging |