
Evaluation of generalization ability for deep learning‐based auto‐segmentation accuracy in limited field of view CBCT of male pelvic region

PURPOSE: The aim of this study was to evaluate the generalization ability of segmentation accuracy for limited field of view (FOV) cone-beam computed tomography (CBCT) of the male pelvic region using a full-image convolutional neural network (CNN). Auto-segmentation accuracy was evaluated using datasets with different intensity distributions and FOV sizes.

METHODS: A total of 171 CBCT datasets from patients with prostate cancer were enrolled: 151, 10, and 10 datasets were acquired with the Vero4DRT, TrueBeam STx, and Clinac-iX systems, respectively, with FOVs of 20, 26, and 25 cm. The regions of interest (ROIs), comprising the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U²-Net CNN architecture was used to train the segmentation model. Of the limited FOV CBCT datasets from Vero4DRT, 131 were used for training (104 datasets) and validation (27 datasets); the remaining datasets were used for testing. The training routine saved the model weights that maximized the Dice similarity coefficient (DSC) on the validation set. Segmentation accuracy was evaluated qualitatively and quantitatively between the ground-truth and predicted ROIs on the different testing datasets.

RESULTS: The mean ± standard deviation visual evaluation scores for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84–0.87 for the prostate and rectum, and 0.48–0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles differed significantly among the three imaging devices, the bladder DSC changed by less than 1 percentage point. The median mean surface distance (MSD) values for all imaging devices were ≤1.2 mm for the bladder and 1.4–2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles differed significantly among the three imaging devices.

CONCLUSION: The proposed method is effective for testing datasets whose intensity distributions and FOVs differ from those of the training datasets.
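For readers who want to reproduce the kind of quantitative comparison described above, the sketch below shows one common way to compute the two reported metrics, the Dice similarity coefficient (DSC) and the mean surface distance (MSD), from a ground-truth and a predicted binary mask. This is a minimal illustration, not the authors' implementation: it assumes NumPy/SciPy, 3D boolean masks, and a known voxel spacing in millimetres, and the function names are placeholders.

# Illustrative sketch only (not from the paper): DSC and a symmetric mean
# surface distance between two binary segmentation masks.
import numpy as np
from scipy import ndimage


def dice_coefficient(gt, pred):
    """DSC = 2 * |GT ∩ PRED| / (|GT| + |PRED|) for boolean masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(gt, pred).sum() / denom


def mean_surface_distance(gt, pred, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (mm) between the surfaces of two boolean masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    # Surface voxels = mask minus its one-voxel erosion.
    gt_surface = gt & ~ndimage.binary_erosion(gt)
    pred_surface = pred & ~ndimage.binary_erosion(pred)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_surface, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surface, sampling=spacing)
    distances = np.concatenate([dist_to_pred[gt_surface],   # GT surface -> prediction
                                dist_to_gt[pred_surface]])  # prediction -> GT surface
    return float(distances.mean())

As a hypothetical usage example, dice_coefficient(gt_mask, cnn_mask) returns a unitless overlap score in [0, 1], and mean_surface_distance(gt_mask, cnn_mask, spacing=voxel_size_mm) returns surface agreement in millimetres, matching the units of the DSC and MSD values reported in the abstract.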


Bibliographic Details
Main Authors: Hirashima, Hideaki; Nakamura, Mitsuhiro; Imanishi, Keiho; Nakao, Megumi; Mizowaki, Takashi
Format: Online Article Text
Language: English
Published: Journal of Applied Clinical Medical Physics (J Appl Clin Med Phys), John Wiley and Sons Inc., 19 January 2023
Subjects: Radiation Oncology Physics
License: Open access under the Creative Commons Attribution 4.0 license (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10161011/
https://www.ncbi.nlm.nih.gov/pubmed/36659871
http://dx.doi.org/10.1002/acm2.13912