
A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays


Bibliographic Details

Main Authors: Vardhan, Avantika, Makhnevich, Alex, Omprakash, Pravan, Hirschorn, David, Barish, Matthew, Cohen, Stuart L., Zanos, Theodoros P.
Format: Online Article Text
Language: English
Published: Bioelectron Med (BioMed Central), 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9809517/
https://www.ncbi.nlm.nih.gov/pubmed/36597113
http://dx.doi.org/10.1186/s42234-022-00103-0
Description:
Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language that results in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used to extract regions of interest (ROIs), and all combinations of segmentation, data balancing, and classification methods were tested to pick the top-performing models. Multifold cross-validation was used to determine the best model from the initially selected top models, based on appropriate performance metrics as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). Performance of the best model was compared against that of expert physician annotators, and heatmaps were produced. Finally, a sensitivity analysis of model performance across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimating the degree of CXR lung opacity using ordinal multiclass classification. A total of 38,365 prospectively annotated CXRs from 17,418 patients, acquired between March 24, 2020, and May 22, 2020, were used.
We tested three neural network architectures (ResNet-50, VGG-16, and CheXNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling), using 38,079 CXR images for training and 286 images as the out-of-the-box validation dataset that underwent expert radiologist adjudication.
Based on these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, based on optimal values of the mean absolute error (MAE) and the Heatmap Concordance Score (HCS). Agreement between the opacity scores predicted by this model and each of the two sets of radiologist scores (OR, Original Reader; OOBTR, Out Of Box Reader) exceeded the inter-radiologist agreement.
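The undersampling strategy that the description names as part of the recommended pipeline can be sketched as follows. This is a minimal illustration of class-balanced undersampling, not the authors' code; the function name and interface are assumptions.

```python
import random
from collections import defaultdict

def undersample(samples, labels, seed=0):
    """Randomly undersample every class down to the size of the
    smallest class, so each opacity grade is equally represented."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    n_min = min(len(group) for group in by_class.values())
    balanced = []
    for label, group in by_class.items():
        for sample in rng.sample(group, n_min):
            balanced.append((sample, label))
    rng.shuffle(balanced)
    return balanced
```

In practice a library such as imbalanced-learn offers the same behavior off the shelf; the hand-rolled version above only shows the mechanics.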
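The multifold cross-validation used for final model selection rests on splitting the data into disjoint folds. A minimal fold-splitting sketch is below; the description does not state the number of folds, so `k` is a placeholder.

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint validation folds and
    return (train_indices, val_indices) pairs, one per fold."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    all_idx = set(idx)
    return [(sorted(all_idx - set(fold)), sorted(fold)) for fold in folds]
```

Each candidate model is trained on the train split of every fold and scored on the matching validation split; the per-fold scores are then averaged to rank the models.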
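The MAE metric and the model-versus-reader comparison reduce to mean absolute error over ordinal scores. The sketch below uses made-up opacity scores for five hypothetical studies purely to illustrate the comparison; the numbers are not from the paper.

```python
def mae(a, b):
    """Mean absolute error between two aligned lists of ordinal scores."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical opacity scores for five studies (illustrative only):
model      = [2, 3, 1, 4, 2]
reader_or  = [2, 3, 2, 4, 2]   # "Original Reader" (OR)
reader_oob = [3, 3, 1, 4, 1]   # "Out Of Box Reader" (OOBTR)

model_vs_or  = mae(model, reader_or)       # 0.2
model_vs_oob = mae(model, reader_oob)      # 0.4
inter_reader = mae(reader_or, reader_oob)  # 0.6
```

In this toy setup the model agrees with each reader more closely than the readers agree with each other, which mirrors the kind of comparison the description reports.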
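The Macro-Averaged Heatmap Concordance Score (MA HCS) is introduced by the paper itself, and its exact formula is not reproduced in this record. The sketch below only illustrates the generic macro-averaging pattern, using intersection-over-union of binarized heatmaps as a stand-in per-class concordance; both choices are assumptions, not the authors' definition.

```python
def iou(a, b, thresh=0.5):
    """Intersection-over-union of two heatmaps (nested lists of floats),
    binarized at `thresh`."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for x, y in zip(row_a, row_b):
            pa, pb = x >= thresh, y >= thresh
            inter += pa and pb
            union += pa or pb
    return inter / union if union else 1.0

def macro_average(per_class_scores):
    """Unweighted mean over classes, so rare opacity grades weigh equally."""
    return sum(per_class_scores) / len(per_class_scores)
```

Macro-averaging (rather than pooling all images together) keeps under-represented opacity grades from being drowned out by the majority class.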
Published in Bioelectron Med (BioMed Central), 2023-01-03, as a Research Article. © The Author(s) 2022. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).