Evaluation of multiple open-source deep learning models for detecting and grading COVID-19 on chest radiographs
Purpose: Chest x-rays are complex to report accurately. Viral pneumonia is often subtle in its radiological appearance. In the context of the COVID-19 pandemic, rapid triage of cases and exclusion of other pathologies with artificial intelligence (AI) can assist over-stretched radiology departments....
Field | Value
---|---
Main Authors | Risman, Alexander; Trelles, Miguel; Denning, David W.
Format | Online Article Text
Language | English
Published | Society of Photo-Optical Instrumentation Engineers, 2021
Subjects | Computer-Aided Diagnosis
Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8734487/ https://www.ncbi.nlm.nih.gov/pubmed/35005058 http://dx.doi.org/10.1117/1.JMI.8.6.064502
Field | Value
---|---
_version_ | 1784628029529522176 |
author | Risman, Alexander; Trelles, Miguel; Denning, David W. |
author_sort | Risman, Alexander |
collection | PubMed |
description | Purpose: Chest x-rays are complex to report accurately. Viral pneumonia is often subtle in its radiological appearance. In the context of the COVID-19 pandemic, rapid triage of cases and exclusion of other pathologies with artificial intelligence (AI) can assist over-stretched radiology departments. We aim to validate three open-source AI models on an external test set. Approach: We tested three open-source deep learning models, COVID-Net, COVIDNet-S-GEO, and CheXNet, for their ability to detect COVID-19 pneumonia and to determine its severity using 129 chest x-rays from two different vendors, Philips and Agfa. Results: All three models detected COVID-19 pneumonia (AUCs from 0.666 to 0.778). Only the COVIDNet-S-GEO and CheXNet models performed well on severity scoring (Pearson's r = 0.927 and 0.833, respectively); COVID-Net performed well only on images taken with a Philips machine (AUC 0.735) and not an Agfa machine (AUC 0.598). Conclusions: Chest x-ray triage for COVID-19 pneumonia can be successfully implemented using existing open-source AI models. Evaluation of the models using local x-ray machines and protocols is highly recommended before implementation to avoid vendor- or protocol-dependent bias. |
format | Online Article Text |
id | pubmed-8734487 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Society of Photo-Optical Instrumentation Engineers |
record_format | MEDLINE/PubMed |
spelling | pubmed-8734487 2022-01-07 J Med Imaging (Bellingham), Computer-Aided Diagnosis. Society of Photo-Optical Instrumentation Engineers, 2021-12-21 (issue 2021-11). /pmc/articles/PMC8734487/ /pubmed/35005058 http://dx.doi.org/10.1117/1.JMI.8.6.064502 Text en © 2021 Society of Photo-Optical Instrumentation Engineers (SPIE) |
title | Evaluation of multiple open-source deep learning models for detecting and grading COVID-19 on chest radiographs |
topic | Computer-Aided Diagnosis |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8734487/ https://www.ncbi.nlm.nih.gov/pubmed/35005058 http://dx.doi.org/10.1117/1.JMI.8.6.064502 |
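The abstract above reports two evaluation metrics: area under the ROC curve (AUC) for COVID-19 detection and Pearson's r for severity scoring. As an illustrative sketch only (not code from the paper, and with invented toy data in place of the study's 129 radiographs), both metrics can be computed from model outputs like this:

```python
# Illustrative sketch of the two metrics reported in the abstract.
# All labels, scores, and severity values below are invented toy data.

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a random negative (ties 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Detection: 1 = COVID-19 pneumonia, 0 = other; scores are model probabilities.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
print(round(roc_auc(labels, scores), 3))

# Severity: radiologist scores vs. model-predicted geographic-extent scores.
rad = [1, 2, 3, 4, 5, 6]
model = [1.2, 1.9, 3.4, 3.8, 5.1, 5.9]
print(round(pearson_r(rad, model), 3))
```

The rank-based AUC formulation matches how AUC is defined regardless of the classifier, which is why the same comparison applies across COVID-Net, COVIDNet-S-GEO, and CheXNet.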