Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study
Main authors: | Ueda, Daiju; Yamamoto, Akira; Shimazaki, Akitoshi; Walston, Shannon Leigh; Matsumoto, Toshimasa; Izumi, Nobuhiro; Tsukioka, Takuma; Komatsu, Hiroaki; Inoue, Hidetoshi; Kabata, Daijiro; Nishiyama, Noritoshi; Miki, Yukio |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2021 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524996/ https://www.ncbi.nlm.nih.gov/pubmed/34663260 http://dx.doi.org/10.1186/s12885-021-08847-9 |
_version_ | 1784585587119882240 |
author | Ueda, Daiju Yamamoto, Akira Shimazaki, Akitoshi Walston, Shannon Leigh Matsumoto, Toshimasa Izumi, Nobuhiro Tsukioka, Takuma Komatsu, Hiroaki Inoue, Hidetoshi Kabata, Daijiro Nishiyama, Noritoshi Miki, Yukio |
author_facet | Ueda, Daiju Yamamoto, Akira Shimazaki, Akitoshi Walston, Shannon Leigh Matsumoto, Toshimasa Izumi, Nobuhiro Tsukioka, Takuma Komatsu, Hiroaki Inoue, Hidetoshi Kabata, Daijiro Nishiyama, Noritoshi Miki, Yukio |
author_sort | Ueda, Daiju |
collection | PubMed |
description | BACKGROUND: We investigated the performance improvement of physicians with varying levels of chest radiology experience when using a commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. METHODS: Chest radiographs and their corresponding chest CT were retrospectively collected from one institution between July 2017 and June 2018. Two author radiologists annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers interpreted the radiographs alone and then reinterpreted them referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if there was significant overlap with the ground truth, specifically, if the intersection over union (IoU) was 0.3 or higher. The sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of the readers' assessments were calculated. RESULTS: In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 nodules of lung cancer) and 253 normal images. The model provided a modest boost to the readers' sensitivity, particularly helping general physicians. The performance of general physicians improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV, while the performance of radiologists improved from 0.51 to 0.60 for sensitivity, from 0.96 to 0.96 for specificity, from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV.
With the CAD, the overall improvement ratios for sensitivity, specificity, accuracy, PPV, and NPV were 1.22 (1.14–1.30), 1.00 (1.00–1.01), 1.03 (1.02–1.04), 1.07 (1.03–1.11), and 1.02 (1.01–1.03), respectively. CONCLUSION: The AI-based CAD improved the ability of physicians to detect lung cancer nodules on chest radiographs. A CAD model can flag regions physicians may have overlooked during their initial assessment. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12885-021-08847-9. |
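The matching criterion described above (a reader's bounding box counts as a correct detection when its intersection over union with the ground-truth nodule box is at least 0.3) can be sketched as follows. This is a minimal illustration, not the study's actual evaluation code; the function names are hypothetical, and boxes are assumed to be (x1, y1, x2, y2) corner coordinates.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: the intersection of the two coordinate ranges.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_correct(pred_box, truth_box, threshold=0.3):
    # Per the study's criterion, a prediction is judged correct
    # when IoU with the ground truth reaches the 0.3 threshold.
    return iou(pred_box, truth_box) >= threshold
```

For example, two 10x10 boxes offset by half their width share an overlap of 50 against a union of 150, giving an IoU of about 0.33, which clears the 0.3 threshold.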
format | Online Article Text |
id | pubmed-8524996 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-85249962021-10-22 Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study Ueda, Daiju Yamamoto, Akira Shimazaki, Akitoshi Walston, Shannon Leigh Matsumoto, Toshimasa Izumi, Nobuhiro Tsukioka, Takuma Komatsu, Hiroaki Inoue, Hidetoshi Kabata, Daijiro Nishiyama, Noritoshi Miki, Yukio BMC Cancer Research Article BioMed Central 2021-10-18 /pmc/articles/PMC8524996/ /pubmed/34663260 http://dx.doi.org/10.1186/s12885-021-08847-9 Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Article Ueda, Daiju Yamamoto, Akira Shimazaki, Akitoshi Walston, Shannon Leigh Matsumoto, Toshimasa Izumi, Nobuhiro Tsukioka, Takuma Komatsu, Hiroaki Inoue, Hidetoshi Kabata, Daijiro Nishiyama, Noritoshi Miki, Yukio Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title | Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title_full | Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title_fullStr | Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title_full_unstemmed | Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title_short | Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
title_sort | artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524996/ https://www.ncbi.nlm.nih.gov/pubmed/34663260 http://dx.doi.org/10.1186/s12885-021-08847-9 |
work_keys_str_mv | AT uedadaiju artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT yamamotoakira artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT shimazakiakitoshi artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT walstonshannonleigh artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT matsumototoshimasa artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT izuminobuhiro artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT tsukiokatakuma artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT komatsuhiroaki artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT inouehidetoshi artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT kabatadaijiro artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT nishiyamanoritoshi artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy AT mikiyukio artificialintelligencesupportedlungcancerdetectionbymultiinstitutionalreaderswithmultivendorchestradiographsaretrospectiveclinicalvalidationstudy |