
Assessment of Critical Feeding Tube Malpositions on Radiographs Using Deep Learning



Bibliographic Details
Main Authors: Singh, Varun, Danda, Varun, Gorniak, Richard, Flanders, Adam, Lakhani, Paras
Format: Online Article Text
Language: English
Published: Springer International Publishing 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6646608/
https://www.ncbi.nlm.nih.gov/pubmed/31073816
http://dx.doi.org/10.1007/s10278-019-00229-9
collection PubMed
description Assess the efficacy of deep convolutional neural networks (DCNNs) in the detection of critical enteric feeding tube malpositions on radiographs. A total of 5475 de-identified, HIPAA-compliant frontal-view chest and abdominal radiographs were obtained, consisting of 174 radiographs of bronchial insertions and 5301 non-critical radiographs, including normal course, normal chest, and normal abdominal studies. Ground-truth classification of enteric feeding tube placement was performed by two board-certified radiologists. Untrained and pretrained models of three DCNN architectures, Inception V3, ResNet50, and DenseNet121, were each employed, all implemented in the TensorFlow framework. Images were split into training (4745), validation (630), and test (100) sets. Both real-time and preprocessing image augmentation strategies were applied. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) on the test data were used to assess the models, and statistical differences among the AUCs were computed; p < 0.05 was considered statistically significant. The pretrained Inception V3, with an AUC of 0.87 (95% CI 0.80–0.94), performed significantly better (p < 0.001) than the untrained Inception V3, with an AUC of 0.60 (95% CI 0.52–0.68). The pretrained Inception V3 also had the highest AUC overall, compared with ResNet50 and DenseNet121, whose AUC values ranged from 0.82 to 0.85. Each pretrained network outperformed its untrained counterpart (p < 0.05). Deep learning demonstrates promise in differentiating critical vs. non-critical placement, with an AUC of 0.87, and pretrained networks outperformed untrained ones in all cases. DCNNs may allow for more rapid identification and communication of critical feeding tube malpositions.
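The article itself does not include source code. A minimal sketch of the pretrained-vs-untrained setup described above, using the TensorFlow/Keras API (the input size, optimizer, augmentation choices, and single sigmoid output here are illustrative assumptions, not the authors' implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(pretrained: bool = True) -> tf.keras.Model:
    """Binary classifier: critical (bronchial insertion) vs. non-critical course.

    pretrained=True loads ImageNet weights, mirroring the paper's
    pretrained-vs-untrained comparison; pretrained=False starts from
    random initialization.
    """
    inputs = layers.Input(shape=(299, 299, 3))  # InceptionV3 default input size
    # Real-time augmentation: mild rotation/zoom only, since horizontal
    # flips would reverse laterality on chest radiographs.
    x = layers.RandomRotation(0.02)(inputs)
    x = layers.RandomZoom(0.1)(x)
    base = tf.keras.applications.InceptionV3(
        weights="imagenet" if pretrained else None,
        include_top=False,
        input_shape=(299, 299, 3),
    )
    x = base(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(critical malposition)
    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model
```

The same builder could be called once per architecture (swapping in `ResNet50` or `DenseNet121` from `tf.keras.applications`) and once per weight setting, giving the six models compared in the study.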
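An AUC with a 95% confidence interval, as reported above, can be estimated from test-set scores with the rank-based (Mann-Whitney) AUC and a percentile bootstrap. A self-contained sketch; the bootstrap is one common way to obtain such an interval, not necessarily the method the authors used:

```python
import numpy as np

def auc_mann_whitney(y_true, y_score):
    """AUC as the Mann-Whitney statistic: P(score of a positive > score of a negative)."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(y_score, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count as 0.5
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap (1 - alpha) confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    n, aucs = len(y_true), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)       # resample cases with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                      # resample must contain both classes
        aucs.append(auc_mann_whitney(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Comparing two AUCs on the same test set, as the study does, is usually done with a paired method such as the DeLong test; the bootstrap above covers only the single-model interval.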
id pubmed-6646608
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-6646608 2019-08-14 Assessment of Critical Feeding Tube Malpositions on Radiographs Using Deep Learning. Singh, Varun; Danda, Varun; Gorniak, Richard; Flanders, Adam; Lakhani, Paras. J Digit Imaging, Article. Springer International Publishing. Published online 2019-05-09; issue date 2019-08. /pmc/articles/PMC6646608/ /pubmed/31073816 http://dx.doi.org/10.1007/s10278-019-00229-9 Text en © The Author(s) 2019. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
topic Article