
End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices


Bibliographic Details
Main Authors: Hong, Joonki, Tran, Hai Hong, Jung, Jinhwan, Jang, Hyeryung, Lee, Dongheon, Yoon, In-Young, Hong, Jung Kyung, Kim, Jeong-Whun
Format: Online Article Text
Language: English
Published: Dove 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9241996/
https://www.ncbi.nlm.nih.gov/pubmed/35783665
http://dx.doi.org/10.2147/NSS.S361270
author Hong, Joonki
Tran, Hai Hong
Jung, Jinhwan
Jang, Hyeryung
Lee, Dongheon
Yoon, In-Young
Hong, Jung Kyung
Kim, Jeong-Whun
collection PubMed
description PURPOSE: Nocturnal sounds contain abundant information and can be obtained easily in a non-contact manner. Sleep staging using nocturnal sounds recorded by common mobile devices may allow daily at-home sleep tracking. The objective of this study is to introduce an end-to-end (sound-to-sleep-stages) deep learning model for sound-based sleep staging designed to work with audio from microphone chips, which are essential components of mobile devices such as modern smartphones. PATIENTS AND METHODS: Two different audio datasets were used: audio routinely recorded by a solitary microphone chip during polysomnography (PSG dataset, N=1154) and audio recorded by a smartphone (smartphone dataset, N=327). The audio was converted into Mel spectrograms to detect latent temporal-frequency patterns of breathing and body movement against ambient noise. The proposed neural network model learns to first extract features from each 30-second epoch and then analyze inter-epoch relationships among the extracted features to finally classify the epochs into sleep stages. RESULTS: Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. Model performance was not considerably affected by sleep apnea or periodic limb movement. External validation with the smartphone dataset also showed 68% epoch-by-epoch agreement. CONCLUSION: The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded by microphone chips to be used for sleep staging. Future studies using nocturnal sounds recorded by mobile devices in home environments may further confirm the use of mobile device recordings as an at-home sleep tracker.
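The abstract describes a two-stage pipeline: spectral features are extracted from each 30-second epoch, then inter-epoch relationships are modeled before classifying epochs into four stages. A rough sketch of that structure is below. This is not the authors' implementation: the sampling rate, FFT parameters, plain log-power spectrogram (in place of a Mel spectrogram), moving-average smoothing (in place of the sequence model), and energy-quantile classifier are all illustrative assumptions.

```python
import numpy as np

SR = 16000          # assumed sampling rate (Hz), illustrative only
EPOCH_SEC = 30      # scoring epoch length, per the abstract
STAGES = ["wake", "light", "deep", "REM"]  # the study's 4-class scheme

def epoch_features(epoch_audio, n_fft=512, hop=256):
    """Crude per-epoch spectral features: time-averaged log-power
    spectrogram, a stand-in for the Mel-spectrogram encoder."""
    frames = []
    for start in range(0, len(epoch_audio) - n_fft + 1, hop):
        frame = epoch_audio[start:start + n_fft] * np.hanning(n_fft)
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(np.log1p(power))
    return np.mean(frames, axis=0)  # one feature vector per epoch

def stage_night(audio):
    """Split a night of audio into 30-s epochs, extract per-epoch
    features, apply inter-epoch context, and assign a stage label."""
    samples_per_epoch = SR * EPOCH_SEC
    n_epochs = len(audio) // samples_per_epoch
    feats = np.stack([
        epoch_features(audio[i * samples_per_epoch:(i + 1) * samples_per_epoch])
        for i in range(n_epochs)
    ])
    # Inter-epoch context: moving average over neighboring epochs,
    # standing in for the sequence model described in the abstract.
    kernel = np.array([0.25, 0.5, 0.25])
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, feats)
    # Placeholder classifier: bin overall epoch energy into 4 stages.
    energy = smoothed.mean(axis=1)
    bins = np.quantile(energy, [0.25, 0.5, 0.75])
    return [STAGES[int(np.digitize(e, bins))] for e in energy]

rng = np.random.default_rng(0)
stages = stage_night(rng.standard_normal(SR * EPOCH_SEC * 8))
print(len(stages))  # 8 epochs in, 8 stage labels out
```

The point of the sketch is the shape of the computation (epoch-wise encoder followed by an inter-epoch stage), not the placeholder classifier, which a real system would replace with a trained network.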
format Online
Article
Text
id pubmed-9241996
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Dove
record_format MEDLINE/PubMed
spelling pubmed-9241996 2022-06-30 End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices Hong, Joonki; Tran, Hai Hong; Jung, Jinhwan; Jang, Hyeryung; Lee, Dongheon; Yoon, In-Young; Hong, Jung Kyung; Kim, Jeong-Whun. Nat Sci Sleep, Original Research. Dove, 2022-06-25. /pmc/articles/PMC9241996/ /pubmed/35783665 http://dx.doi.org/10.2147/NSS.S361270 Text en © 2022 Hong et al. Published by Dove Medical Press Limited under the Creative Commons Attribution – Non Commercial (unported, v3.0) license (https://creativecommons.org/licenses/by-nc/3.0/); full terms at https://www.dovepress.com/terms.php. Non-commercial use is permitted without further permission provided the work is properly attributed; for commercial use, see paragraphs 4.2 and 5 of the publisher's Terms.
title End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9241996/
https://www.ncbi.nlm.nih.gov/pubmed/35783665
http://dx.doi.org/10.2147/NSS.S361270