
Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network

Bibliographic Details
Main Authors: Yoo, Sang Kyun, Kim, Hojin, Choi, Byoung Su, Park, Inkyung, Kim, Jin Sung
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497126/
https://www.ncbi.nlm.nih.gov/pubmed/36139692
http://dx.doi.org/10.3390/cancers14184534
_version_ 1784794437301305344
author Yoo, Sang Kyun
Kim, Hojin
Choi, Byoung Su
Park, Inkyung
Kim, Jin Sung
author_facet Yoo, Sang Kyun
Kim, Hojin
Choi, Byoung Su
Park, Inkyung
Kim, Jin Sung
author_sort Yoo, Sang Kyun
collection PubMed
description SIMPLE SUMMARY: Despite the numerous benefits of cone-beam computed tomography (CBCT), its application to radiotherapy has been limited mainly by degraded image quality. Recently, enhancing CBCT image quality by generating synthetic CT images with deep convolutional neural networks (CNNs) has become common. Most previous works, however, generated synthetic CT with simple, classical intensity-driven losses during network training and did not report a full set of verifications. This work trained the network by combining feature- and intensity-driven losses and demonstrated the clinical relevance of the synthetic CT images by assessing both image similarity and dose calculation accuracy with a commercial Monte-Carlo algorithm. ABSTRACT: Deep convolutional neural networks (CNNs) have helped enhance the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT. Most previous works, however, trained the network with intensity-based loss functions, which may fail to promote image feature similarity, and their verifications were not sufficient to demonstrate clinical applicability. This work investigated the effect of loss functions combining feature- and intensity-driven losses on synthetic CT generation, and strengthened the verification of the generated images in terms of both image similarity and dosimetric accuracy. The proposed strategy emphasized feature-driven quantification by (1) training the network with a perceptual loss in addition to L1 and structural similarity (SSIM) losses addressing anatomical similarity, and (2) evaluating image similarity with a feature mapping ratio (FMR) in addition to conventional metrics. The synthetic CT images were also assessed for dose calculation accuracy with a commercial Monte-Carlo algorithm. The network was trained with 50 paired CBCT-CT scans acquired on the same CT simulator and treatment unit, so that no environmental factor other than the loss function varied. For 10 independent test cases, incorporating the perceptual loss into the L1 and SSIM losses outperformed the other combinations, improving the FMR of image similarity by 10% and the dose calculation accuracy by 1–2% in gamma passing rate at the 1%/1 mm criterion.
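
As an illustration of the loss combination described in the abstract, the following PyTorch sketch shows a weighted sum of L1, SSIM, and VGG-based perceptual losses for comparing a synthetic CT slice against its paired planning CT slice. The weights, the choice of VGG-19 feature layer, and the simplified single-scale SSIM helper are assumptions made for illustration only, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19


def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified single-scale SSIM over 11x11 uniform windows,
    # for single-channel images scaled to [0, 1] (illustrative assumption).
    mu_x = F.avg_pool2d(x, 11, 1, 5)
    mu_y = F.avg_pool2d(y, 11, 1, 5)
    sigma_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()


class CombinedLoss(nn.Module):
    # L1 + SSIM (intensity-driven) + VGG-19 perceptual (feature-driven) loss.
    # The relative weights and the feature-layer cut-off are hypothetical values.
    def __init__(self, w_l1=1.0, w_ssim=1.0, w_perc=0.1):
        super().__init__()
        self.w_l1, self.w_ssim, self.w_perc = w_l1, w_ssim, w_perc
        # Truncated, frozen VGG-19 as a fixed feature extractor.
        self.features = vgg19(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, synthetic_ct, planning_ct):
        # Inputs are expected as (N, 1, H, W) tensors.
        l1 = F.l1_loss(synthetic_ct, planning_ct)
        ssim_loss = 1.0 - ssim(synthetic_ct, planning_ct)
        # VGG expects 3-channel input; repeat the single CT channel.
        f_syn = self.features(synthetic_ct.repeat(1, 3, 1, 1))
        f_ref = self.features(planning_ct.repeat(1, 3, 1, 1))
        perceptual = F.l1_loss(f_syn, f_ref)
        return self.w_l1 * l1 + self.w_ssim * ssim_loss + self.w_perc * perceptual

During training, CombinedLoss would be applied to the generator output and the paired planning CT, with the relative weights tuned to the dataset; the abstract's comparison of loss combinations corresponds to switching individual terms on or off.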
format Online
Article
Text
id pubmed-9497126
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9497126 2022-09-23 Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network Yoo, Sang Kyun Kim, Hojin Choi, Byoung Su Park, Inkyung Kim, Jin Sung Cancers (Basel) Article MDPI 2022-09-19 /pmc/articles/PMC9497126/ /pubmed/36139692 http://dx.doi.org/10.3390/cancers14184534 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yoo, Sang Kyun
Kim, Hojin
Choi, Byoung Su
Park, Inkyung
Kim, Jin Sung
Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title_full Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title_fullStr Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title_full_unstemmed Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title_short Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network
title_sort generation and evaluation of synthetic computed tomography (ct) from cone-beam ct (cbct) by incorporating feature-driven loss into intensity-based loss functions in deep convolutional neural network
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497126/
https://www.ncbi.nlm.nih.gov/pubmed/36139692
http://dx.doi.org/10.3390/cancers14184534
work_keys_str_mv AT yoosangkyun generationandevaluationofsyntheticcomputedtomographyctfromconebeamctcbctbyincorporatingfeaturedrivenlossintointensitybasedlossfunctionsindeepconvolutionalneuralnetwork
AT kimhojin generationandevaluationofsyntheticcomputedtomographyctfromconebeamctcbctbyincorporatingfeaturedrivenlossintointensitybasedlossfunctionsindeepconvolutionalneuralnetwork
AT choibyoungsu generationandevaluationofsyntheticcomputedtomographyctfromconebeamctcbctbyincorporatingfeaturedrivenlossintointensitybasedlossfunctionsindeepconvolutionalneuralnetwork
AT parkinkyung generationandevaluationofsyntheticcomputedtomographyctfromconebeamctcbctbyincorporatingfeaturedrivenlossintointensitybasedlossfunctionsindeepconvolutionalneuralnetwork
AT kimjinsung generationandevaluationofsyntheticcomputedtomographyctfromconebeamctcbctbyincorporatingfeaturedrivenlossintointensitybasedlossfunctionsindeepconvolutionalneuralnetwork