
Explainability and controllability of patient‐specific deep learning with attention‐based augmentation for markerless image‐guided radiotherapy


Bibliographic Details

Main Authors: Terunuma, Toshiyuki; Sakae, Takeji; Hu, Yachao; Takei, Hideyuki; Moriya, Shunsuke; Okumura, Toshiyuki; Sakurai, Hideyuki
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10100026/
https://www.ncbi.nlm.nih.gov/pubmed/36354286
http://dx.doi.org/10.1002/mp.16095
Description:

BACKGROUND: We reported the concept of patient‐specific deep learning (DL) for real‐time markerless tumor segmentation in image‐guided radiotherapy (IGRT). The method aimed to control the attention of convolutional neural networks (CNNs) through artificial differences in co‐occurrence probability (CoOCP) in the training datasets, that is, by focusing CNN attention on soft tissues while ignoring bones. However, the effectiveness of this attention‐based data augmentation has not been confirmed with explainability techniques. Furthermore, the feasibility of tumor segmentation in clinical kilovolt (kV) X‐ray fluoroscopic (XF) images has not been confirmed against reasonable ground truths.

PURPOSE: The first aim of this paper was to present evidence that the proposed method provides an explanation and control of DL behavior. The second was to validate real‐time lung tumor segmentation in clinical kV XF images for IGRT.

METHODS: This retrospective study included 10 patients with lung cancer. Patient‐specific and XF‐angle‐specific image pairs comprising digitally reconstructed radiographs (DRRs) and projected‐clinical‐target‐volume (pCTV) images were calculated from four‐dimensional computed tomography data and treatment planning information. The training datasets were primarily augmented by random overlay (RO) and noise injection (NI): RO differentiates the positional CoOCP of soft tissues and bones, and NI creates a difference in the frequency of occurrence of local and global image features. The CNNs for each patient and angle were automatically optimized in the DL training stage to transform the training DRRs into pCTV images. In the inference stage, the trained CNNs transformed the test XF images into pCTV images, thus identifying target positions and shapes.

RESULTS: Visual analysis of DL attention heatmaps for a test image demonstrated that our method focused CNN attention on soft tissue and global image features rather than on bones and local features. The processing time for each patient‐and‐angle‐specific dataset was ∼30 min in the training stage and 8 ms/frame in the inference stage. The estimated three‐dimensional 95th‐percentile tracking error, Jaccard index, and Hausdorff distance for the 10 patients were 1.3–3.9 mm, 0.85–0.94, and 0.6–4.9 mm, respectively.

CONCLUSIONS: The proposed attention‐based data augmentation with both RO and NI made CNN behavior more explainable and more controllable. The results demonstrated the feasibility of real‐time markerless lung tumor segmentation in kV XF images for IGRT.
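The abstract's two augmentations can be pictured with a minimal NumPy sketch. This is an illustrative reconstruction, not the authors' implementation: the function name, the use of a precomputed bone-like overlay image, and all parameter values (shift range, noise level) are assumptions for illustration only.

```python
import numpy as np

def augment_drr(drr, bone_overlay, rng, max_shift=16, noise_sigma=0.05):
    """Illustrative attention-based augmentation of one DRR (values in [0, 1]).

    Random overlay (RO): paste a bone-like overlay at a random offset, so
    bone position no longer co-occurs with the target position and the
    network is pushed to ignore bones.
    Noise injection (NI): add pixel noise that perturbs local features,
    biasing the network toward global soft-tissue structure.
    """
    h, w = drr.shape
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)

    # Shift the overlay by (dy, dx), zero-padding at the borders.
    shifted = np.zeros_like(bone_overlay)
    dst_y = slice(max(dy, 0), h + min(dy, 0))
    dst_x = slice(max(dx, 0), w + min(dx, 0))
    src_y = slice(max(-dy, 0), h + min(-dy, 0))
    src_x = slice(max(-dx, 0), w + min(-dx, 0))
    shifted[dst_y, dst_x] = bone_overlay[src_y, src_x]

    augmented = drr + shifted                              # RO
    augmented += rng.normal(0.0, noise_sigma, drr.shape)   # NI
    return np.clip(augmented, 0.0, 1.0)

rng = np.random.default_rng(0)
drr = rng.random((64, 64))            # stand-in for a training DRR
bones = rng.random((64, 64)) * 0.3    # stand-in for a bone-like overlay
out = augment_drr(drr, bones, rng)
```

Repeating this with fresh random offsets and noise for every training sample yields a dataset in which only the soft-tissue/target relationship is positionally consistent, which is the co-occurrence difference the abstract describes.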
Journal: Med Phys, section "Emerging Imaging and Therapy Modalities"
Publication dates: published online 2022-11-24; issue date 2023-01
Collection: PubMed (National Center for Biotechnology Information); record format: MEDLINE/PubMed; record id: pubmed-10100026
License: © 2022 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of the American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/), which permits use, distribution, and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.