
Cardio-respiratory signal extraction from video camera data for continuous non-contact vital sign monitoring using deep learning

Non-contact vital sign monitoring enables the estimation of vital signs, such as heart rate, respiratory rate and oxygen saturation (SpO(2)), by measuring subtle color changes on the skin surface using a video camera. For patients in a hospital ward, the main challenges in the development of continuous and robust non-contact monitoring techniques are the identification of time periods and the segmentation of skin regions of interest (ROIs) from which vital signs can be estimated. We propose a deep learning framework to tackle these challenges. Approach: This paper presents two convolutional neural network (CNN) models. The first network was designed for detecting the presence of a patient and segmenting the patient’s skin area. The second network combined the output from the first network with optical flow for identifying time periods of clinical intervention so that these periods can be excluded from the estimation of vital signs. Both networks were trained using video recordings from a clinical study involving 15 pre-term infants conducted in the high dependency area of the neonatal intensive care unit (NICU) of the John Radcliffe Hospital in Oxford, UK. Main results: Our proposed methods achieved an accuracy of 98.8% for patient detection, a mean intersection-over-union (IOU) score of 88.6% for skin segmentation and an accuracy of 94.5% for clinical intervention detection using two-fold cross validation. Our deep learning models produced accurate results and were robust to different skin tones, changes in lighting conditions, pose variations and different clinical interventions by medical staff and family visitors. Significance: Our approach allows cardio-respiratory signals to be derived continuously from the patient’s skin during periods when the patient is present and no clinical intervention is undertaken.


Bibliographic Details
Main Authors: Chaichulee, Sitthichok, Villarroel, Mauricio, Jorge, João, Arteta, Carlos, McCormick, Kenny, Zisserman, Andrew, Tarassenko, Lionel
Format: Online Article Text
Language: English
Published: IOP Publishing 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7655150/
https://www.ncbi.nlm.nih.gov/pubmed/31661680
http://dx.doi.org/10.1088/1361-6579/ab525c
author Chaichulee, Sitthichok
Villarroel, Mauricio
Jorge, João
Arteta, Carlos
McCormick, Kenny
Zisserman, Andrew
Tarassenko, Lionel
collection PubMed
description Non-contact vital sign monitoring enables the estimation of vital signs, such as heart rate, respiratory rate and oxygen saturation (SpO(2)), by measuring subtle color changes on the skin surface using a video camera. For patients in a hospital ward, the main challenges in the development of continuous and robust non-contact monitoring techniques are the identification of time periods and the segmentation of skin regions of interest (ROIs) from which vital signs can be estimated. We propose a deep learning framework to tackle these challenges. Approach: This paper presents two convolutional neural network (CNN) models. The first network was designed for detecting the presence of a patient and segmenting the patient’s skin area. The second network combined the output from the first network with optical flow for identifying time periods of clinical intervention so that these periods can be excluded from the estimation of vital signs. Both networks were trained using video recordings from a clinical study involving 15 pre-term infants conducted in the high dependency area of the neonatal intensive care unit (NICU) of the John Radcliffe Hospital in Oxford, UK. Main results: Our proposed methods achieved an accuracy of 98.8% for patient detection, a mean intersection-over-union (IOU) score of 88.6% for skin segmentation and an accuracy of 94.5% for clinical intervention detection using two-fold cross validation. Our deep learning models produced accurate results and were robust to different skin tones, changes in lighting conditions, pose variations and different clinical interventions by medical staff and family visitors. Significance: Our approach allows cardio-respiratory signals to be derived continuously from the patient’s skin during periods when the patient is present and no clinical intervention is undertaken.
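The skin-segmentation result above is reported as a mean intersection-over-union (IOU) of 88.6%. As a minimal illustration of the metric (not the paper's implementation), the IOU of a predicted binary mask against a ground-truth mask can be computed as follows; `pred` and `truth` are hypothetical toy masks:

```python
def iou(pred, truth):
    """Intersection-over-union for two flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))  # pixels in both masks
    union = sum(p | t for p, t in zip(pred, truth))  # pixels in either mask
    return inter / union if union else 1.0           # empty masks agree fully

# Toy 6-pixel masks: 3 pixels overlap, 5 pixels in the union.
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(iou(pred, truth))  # 0.6
```

The mean IOU reported in the paper would be this quantity averaged over all evaluated frames.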
format Online
Article
Text
id pubmed-7655150
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher IOP Publishing
record_format MEDLINE/PubMed
spelling pubmed-7655150 2020-11-12 Physiol Meas Paper IOP Publishing 2019-11 2019-12-02 /pmc/articles/PMC7655150/ /pubmed/31661680 http://dx.doi.org/10.1088/1361-6579/ab525c Text en © 2019 Institute of Physics and Engineering in Medicine. Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence (http://creativecommons.org/licenses/by/3.0/). Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
title Cardio-respiratory signal extraction from video camera data for continuous non-contact vital sign monitoring using deep learning
topic Paper