Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients
Main Authors: Zhang, Yun; Ding, Sheng-gou; Gong, Xiao-chang; Yuan, Xing-xing; Lin, Jia-fan; Chen, Qi; Li, Jin-gao
Format: Online Article Text
Language: English
Published: SAGE Publications, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8918752/ https://www.ncbi.nlm.nih.gov/pubmed/35262422 http://dx.doi.org/10.1177/15330338221085358
_version_ | 1784668798766284800 |
author | Zhang, Yun Ding, Sheng-gou Gong, Xiao-chang Yuan, Xing-xing Lin, Jia-fan Chen, Qi Li, Jin-gao |
author_facet | Zhang, Yun Ding, Sheng-gou Gong, Xiao-chang Yuan, Xing-xing Lin, Jia-fan Chen, Qi Li, Jin-gao |
author_sort | Zhang, Yun |
collection | PubMed |
description | Purpose: To overcome the imaging artifacts and Hounsfield unit inaccuracy that limit cone-beam computed tomography, a conditional generative adversarial network is proposed to synthesize high-quality computed tomography-like images from cone-beam computed tomography images. Methods: A total of 120 paired cone-beam computed tomography and computed tomography scans of patients with head and neck cancer treated between January 2019 and December 2020 were retrospectively collected; the scans of 90 patients were assembled into the training and validation datasets, and the scans of 30 patients were used as the testing dataset. The proposed method integrates a U-Net backbone architecture with residual blocks into a conditional generative adversarial network framework to learn a mapping from cone-beam computed tomography images to paired planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess the performance of this method compared with U-Net and CycleGAN. Results: The synthesized computed tomography images produced by the conditional generative adversarial network were visually similar to the planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio calculated from test images generated by the conditional generative adversarial network were all significantly different from those of CycleGAN and U-Net.
The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio values between the synthesized computed tomography and the reference computed tomography were 16.75 ± 11.07 Hounsfield units, 58.15 ± 28.64 Hounsfield units, 0.92 ± 0.04, and 30.58 ± 3.86 dB for the conditional generative adversarial network; 20.66 ± 12.15 Hounsfield units, 66.53 ± 29.73 Hounsfield units, 0.90 ± 0.05, and 29.29 ± 3.49 dB for CycleGAN; and 16.82 ± 10.99 Hounsfield units, 58.68 ± 28.34 Hounsfield units, 0.92 ± 0.04, and 30.48 ± 3.83 dB for U-Net, respectively. Conclusions: The synthesized computed tomography generated by the cone-beam computed tomography-based conditional generative adversarial network method has accurate computed tomography numbers while preserving the same anatomical structure as the cone-beam computed tomography. It can be used effectively for quantitative applications in radiotherapy. |
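The description above evaluates the synthesized computed tomography against the reference with four image-similarity metrics. As a minimal illustrative sketch (not the authors' code), these metrics can be computed on same-shape Hounsfield-unit arrays roughly as follows; the `data_range` value used for the peak signal-to-noise ratio and the structural similarity index is an assumption here, and the SSIM shown is a single-window global variant rather than the sliding-window form typically reported:

```python
import numpy as np

def mae(ref, syn):
    # Mean absolute error in Hounsfield units
    return float(np.mean(np.abs(ref - syn)))

def rmse(ref, syn):
    # Root-mean-square error in Hounsfield units
    return float(np.sqrt(np.mean((ref - syn) ** 2)))

def psnr(ref, syn, data_range=2000.0):
    # Peak signal-to-noise ratio in dB; data_range is an assumed HU span
    mse = np.mean((ref - syn) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, syn, data_range=2000.0):
    # Global (single-window) SSIM with the standard stabilizing constants
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), syn.mean()
    var_x, var_y = ref.var(), syn.var()
    cov = np.mean((ref - mu_x) * (syn - mu_y))
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Demo on synthetic data: a "reference CT" slice and a noisy "synthetic CT"
rng = np.random.default_rng(0)
ct = rng.uniform(-1000.0, 1000.0, size=(64, 64))
sct = ct + rng.normal(0.0, 10.0, size=(64, 64))
print(mae(ct, sct), rmse(ct, sct), ssim_global(ct, sct), psnr(ct, sct))
```

Published results usually rely on a library implementation (e.g. scikit-image's windowed SSIM), which will differ numerically from the global variant sketched here.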
format | Online Article Text |
id | pubmed-8918752 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-8918752 2022-03-15 Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients Zhang, Yun Ding, Sheng-gou Gong, Xiao-chang Yuan, Xing-xing Lin, Jia-fan Chen, Qi Li, Jin-gao Technol Cancer Res Treat Original Article Purpose: To overcome the imaging artifacts and Hounsfield unit inaccuracy that limit cone-beam computed tomography, a conditional generative adversarial network is proposed to synthesize high-quality computed tomography-like images from cone-beam computed tomography images. Methods: A total of 120 paired cone-beam computed tomography and computed tomography scans of patients with head and neck cancer treated between January 2019 and December 2020 were retrospectively collected; the scans of 90 patients were assembled into the training and validation datasets, and the scans of 30 patients were used as the testing dataset. The proposed method integrates a U-Net backbone architecture with residual blocks into a conditional generative adversarial network framework to learn a mapping from cone-beam computed tomography images to paired planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess the performance of this method compared with U-Net and CycleGAN. Results: The synthesized computed tomography images produced by the conditional generative adversarial network were visually similar to the planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio calculated from test images generated by the conditional generative adversarial network were all significantly different from those of CycleGAN and U-Net.
The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio values between the synthesized computed tomography and the reference computed tomography were 16.75 ± 11.07 Hounsfield units, 58.15 ± 28.64 Hounsfield units, 0.92 ± 0.04, and 30.58 ± 3.86 dB for the conditional generative adversarial network; 20.66 ± 12.15 Hounsfield units, 66.53 ± 29.73 Hounsfield units, 0.90 ± 0.05, and 29.29 ± 3.49 dB for CycleGAN; and 16.82 ± 10.99 Hounsfield units, 58.68 ± 28.34 Hounsfield units, 0.92 ± 0.04, and 30.48 ± 3.83 dB for U-Net, respectively. Conclusions: The synthesized computed tomography generated by the cone-beam computed tomography-based conditional generative adversarial network method has accurate computed tomography numbers while preserving the same anatomical structure as the cone-beam computed tomography. It can be used effectively for quantitative applications in radiotherapy. SAGE Publications 2022-03-09 /pmc/articles/PMC8918752/ /pubmed/35262422 http://dx.doi.org/10.1177/15330338221085358 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by-nc/4.0/ This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
spellingShingle | Original Article Zhang, Yun Ding, Sheng-gou Gong, Xiao-chang Yuan, Xing-xing Lin, Jia-fan Chen, Qi Li, Jin-gao Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title | Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title_full | Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title_fullStr | Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title_full_unstemmed | Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title_short | Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients |
title_sort | generating synthesized computed tomography from cbct using a conditional generative adversarial network for head and neck cancer patients |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8918752/ https://www.ncbi.nlm.nih.gov/pubmed/35262422 http://dx.doi.org/10.1177/15330338221085358 |
work_keys_str_mv | AT zhangyun generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT dingshenggou generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT gongxiaochang generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT yuanxingxing generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT linjiafan generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT chenqi generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients AT lijingao generatingsynthesizedcomputedtomographyfromcbctusingaconditionalgenerativeadversarialnetworkforheadandneckcancerpatients |