
Fast body part segmentation and tracking of neonatal video data using deep learning


Bibliographic Details
Main Authors: Antink, Christoph Hoog, Ferreira, Joana Carlos Mesquita, Paul, Michael, Lyra, Simon, Heimann, Konrad, Karthik, Srinivasa, Joseph, Jayaraj, Jayaraman, Kumutha, Orlikowsky, Thorsten, Sivaprakasam, Mohanasankar, Leonhardt, Steffen
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7679364/
https://www.ncbi.nlm.nih.gov/pubmed/33094430
http://dx.doi.org/10.1007/s11517-020-02251-4
collection PubMed
description Photoplethysmography imaging (PPGI) for non-contact monitoring of preterm infants in the neonatal intensive care unit (NICU) is a promising technology, as it could reduce medical adhesive-related skin injuries and associated complications. For practical implementations of PPGI, a region of interest has to be detected automatically in real time. As neonates’ body proportions differ significantly from those of adults, existing approaches cannot be used in a straightforward way, and color-based skin detection requires RGB data, thus prohibiting the use of less intrusive near-infrared (NIR) acquisition. In this paper, we present a deep learning-based method for segmentation of neonatal video data. We augmented an existing encoder-decoder semantic segmentation method with a modified version of the ResNet-50 encoder. This reduced the computational time by a factor of 7.5, so that 30 frames per second can be processed at 960 × 576 pixels. The method was developed and optimized on publicly available databases with segmentation data from adults. For evaluation, a comprehensive dataset consisting of RGB and NIR video recordings from 29 neonates with various skin tones, recorded in two NICUs in Germany and India, was used. From all recordings, 643 frames were manually segmented. After pre-training the model on the public adult data, parts of the neonatal data were used for additional learning, and left-out neonates were used for cross-validated evaluation. On the RGB data, the head is segmented well (82% intersection over union, 88% accuracy), and performance is comparable with that achieved on large, public, non-neonatal datasets. Performance on the NIR data, however, was inferior. By employing data augmentation to generate additional virtual NIR data for training, results could be improved, and the head could be segmented with 62% intersection over union and 65% accuracy. The method is, in principle, capable of performing segmentation in real time and may thus provide a useful tool for future PPGI applications. [Figure: see text]
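The abstract reports head-segmentation quality as intersection over union (IoU) and pixel accuracy. As a minimal illustrative sketch (not code from the paper), these two metrics can be computed for a single binary mask, here using plain Python lists with 1 marking "head" pixels and 0 background:

```python
def iou_and_accuracy(pred, target):
    """Intersection over union and pixel accuracy for two flat binary
    masks of equal length (1 = class pixel, e.g. 'head'; 0 = background)."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)      # both masks set
    union = sum(1 for p, t in zip(pred, target) if p or t)       # either mask set
    correct = sum(1 for p, t in zip(pred, target) if p == t)     # matching pixels
    iou = inter / union if union else 1.0   # empty masks agree perfectly
    accuracy = correct / len(pred)
    return iou, accuracy

# Toy 4x4 masks, flattened row by row: the prediction covers one pixel
# more than the ground-truth head region.
pred   = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
target = [1, 1, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
iou, acc = iou_and_accuracy(pred, target)  # iou = 0.75, acc = 0.9375
```

Note that IoU penalizes only the segmented class's overlap, while pixel accuracy also rewards correctly labeled background, which is why the paper's accuracy figures exceed the corresponding IoU figures.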
id pubmed-7679364
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-7679364 2020-11-23 Med Biol Eng Comput, Original Article. Springer Berlin Heidelberg, published online 2020-10-23. Text en © The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided that appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. Material not covered by the article's licence, or whose intended use exceeds the permitted use, requires permission directly from the copyright holder.
topic Original Article