
Robust in-vehicle respiratory rate detection using multimodal signal fusion


Bibliographic Details
Main Authors: Warnecke, Joana M., Lasenby, Joan, Deserno, Thomas M.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10665475/
https://www.ncbi.nlm.nih.gov/pubmed/37993552
http://dx.doi.org/10.1038/s41598-023-47504-y
author Warnecke, Joana M.
Lasenby, Joan
Deserno, Thomas M.
collection PubMed
description Continuous health monitoring in private spaces such as the car is not yet fully exploited to detect diseases at an early stage. Therefore, we develop a redundant health monitoring sensor system and signal fusion approaches to determine the respiratory rate during driving. To recognise the breathing movements, we use a piezoelectric sensor, two accelerometers attached to the seat and the seat belt, and a camera behind the windscreen. We record data from 15 subjects during three driving scenarios (15 min each): city, highway, and countryside. An additional chest belt provides the ground truth. We compare four convolutional neural network (CNN)-based fusion approaches: early, sensor-based late, signal-based late, and hybrid fusion. We evaluate the fusion performance for all four signals to determine the usable portion of driving time and the signal combination. The hybrid algorithm fusing all four signals is most effective in detecting respiratory rates in the city ([Formula: see text]), highway ([Formula: see text]), and countryside ([Formula: see text]). In summary, 60% of the total driving time can be used to measure the respiratory rate. The number of signals used in the multi-signal fusion improves reliability and enables continuous health monitoring in a driving vehicle.
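The abstract contrasts early fusion (combine the raw signals, then estimate once) with late fusion (estimate per signal, then combine the results). The sketch below illustrates only that distinction, not the authors' CNN pipelines: it fuses four synthetic breathing signals with a simple spectral-peak rate estimator. The sampling rate, signal model, and all function names are assumptions made for the illustration.

```python
import numpy as np

FS = 100          # sampling rate in Hz (assumed; not stated in the abstract)
RATE_BPM = 15.0   # simulated ground-truth respiratory rate, breaths per minute

def simulate_signal(rate_bpm, seconds=60, noise=0.5, seed=0):
    """Synthetic breathing-movement signal: a sinusoid at the respiratory
    frequency plus Gaussian noise (stand-in for piezo/accelerometer/camera data)."""
    rng = np.random.default_rng(seed)
    t = np.arange(seconds * FS) / FS
    return np.sin(2 * np.pi * (rate_bpm / 60.0) * t) + noise * rng.standard_normal(t.size)

def rate_from_spectrum(x):
    """Estimate the respiratory rate (bpm) as the dominant spectral peak
    within a plausible breathing band (0.1-0.5 Hz, i.e. 6-30 bpm)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / FS)
    mag = np.abs(np.fft.rfft(x - x.mean()))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(mag[band])]

# Four redundant channels: piezo, seat accelerometer, belt accelerometer, camera.
signals = [simulate_signal(RATE_BPM, seed=s) for s in range(4)]

# Early fusion: combine the raw signals first, then run one estimator.
early_estimate = rate_from_spectrum(np.mean(signals, axis=0))

# Signal-based late fusion: run the estimator per signal, then combine results.
late_estimate = float(np.mean([rate_from_spectrum(s) for s in signals]))

print(f"early fusion: {early_estimate:.1f} bpm")
print(f"late fusion:  {late_estimate:.1f} bpm")
```

In the paper the per-branch estimators are CNNs and a fourth, hybrid variant mixes both schemes; the redundancy across channels is what lets a fused estimate survive when any single sensor is disturbed by driving motion.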
id pubmed-10665475
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-10665475 2023-11-22 Robust in-vehicle respiratory rate detection using multimodal signal fusion Warnecke, Joana M.; Lasenby, Joan; Deserno, Thomas M. Sci Rep Article
Nature Publishing Group UK 2023-11-22 /pmc/articles/PMC10665475/ /pubmed/37993552 http://dx.doi.org/10.1038/s41598-023-47504-y Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Robust in-vehicle respiratory rate detection using multimodal signal fusion
topic Article