Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study

BACKGROUND: Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts.

METHODS: In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting.

FINDINGS: We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83–0·92], p=0·0062; SD 0·86 [0·71–0·91], p=0·0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76–0·88) and SD 0·79 (0·68–0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56–0·80) and SD 0·50 (0·34–0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60–0·81) and SD 0·47 (0·35–0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013).

INTERPRETATION: We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts’ segmentation style and preference might affect model performance.
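The two overlap measures named in the METHODS, volumetric dice (VD) and surface dice (SD), can be made concrete with a short sketch. The Python below illustrates the standard definitions of these metrics, not the authors' implementation; the binary NumPy masks, the voxel spacing argument, and the 2 mm tolerance default are assumptions for the example.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric dice (VD): 2|A intersect B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

def surface_dice(a: np.ndarray, b: np.ndarray,
                 tol_mm: float = 2.0,
                 spacing: tuple = (1.0, 1.0, 1.0)) -> float:
    """Surface dice (SD) at tolerance tol_mm: the fraction of each contour's
    surface voxels lying within tol_mm of the other contour's surface."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)  # boundary voxels of mask A
    surf_b = b ^ binary_erosion(b)  # boundary voxels of mask B
    # Distance (in mm, via the voxel spacing) from every voxel to the
    # nearest surface voxel of each mask.
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    close = (dist_to_b[surf_a] <= tol_mm).sum() + (dist_to_a[surf_b] <= tol_mm).sum()
    total = surf_a.sum() + surf_b.sum()
    # Note: only the both-empty edge case is handled explicitly here.
    return close / total if total else 1.0

Surface dice rewards agreement of contour boundaries within a tolerable margin, which is why it can diverge from volumetric dice on large, mostly overlapping volumes; this distinction is why the study reports both.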


Bibliographic Details
Main Authors: Hosny, Ahmed, Bitterman, Danielle S, Guthier, Christian V, Qian, Jack M, Roberts, Hannah, Perni, Subha, Saraf, Anurag, Peng, Luke C, Pashtan, Itai, Ye, Zezhong, Kann, Benjamin H, Kozono, David E, Christiani, David, Catalano, Paul J, Aerts, Hugo J W L, Mak, Raymond H
Format: Online, Article, Text
Language: English
Published: Lancet Digit Health, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9435511/
https://www.ncbi.nlm.nih.gov/pubmed/36028289
http://dx.doi.org/10.1016/S2589-7500(22)00129-7
Collection: PubMed
Record ID: pubmed-9435511
License: Open Access article under the CC BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by/4.0/)