
Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning


Bibliographic Details
Main Authors: Rossi, Matteo, Belotti, Gabriele, Paganelli, Chiara, Pella, Andrea, Barcellini, Amelia, Cerveri, Pietro, Baroni, Guido
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9297981/
https://www.ncbi.nlm.nih.gov/pubmed/34636429
http://dx.doi.org/10.1002/mp.15282
_version_ 1784750596712038400
author Rossi, Matteo
Belotti, Gabriele
Paganelli, Chiara
Pella, Andrea
Barcellini, Amelia
Cerveri, Pietro
Baroni, Guido
author_facet Rossi, Matteo
Belotti, Gabriele
Paganelli, Chiara
Pella, Andrea
Barcellini, Amelia
Cerveri, Pietro
Baroni, Guido
author_sort Rossi, Matteo
collection PubMed
description Purpose: Cone beam computed tomography (CBCT) is a standard solution for in‐room image guidance in radiation therapy. It is used to evaluate and compensate for anatomopathological changes between the dose delivery plan and the fraction delivery day. CBCT is a fast and versatile solution, but it suffers from drawbacks like low contrast and requires proper calibration to derive density values. Although these limitations are even more prominent with in‐room customized CBCT systems, strategies based on deep learning have shown potential in improving image quality. As such, this article presents a method based on a convolutional neural network and a novel two‐step supervised training based on the transfer learning paradigm for shading correction in CBCT volumes with narrow field of view (FOV) acquired with an ad hoc in‐room system. Methods: We designed a U‐Net convolutional neural network, trained on axial slices of corresponding CT/CBCT pairs. To improve the generalization capability of the network, we exploited two‐stage learning using two distinct datasets. First, the network weights were trained using synthetic CBCT scans generated from a public dataset; then only the deepest layers of the network were trained again with real‐world clinical data to fine‐tune the weights. Synthetic data were generated according to real data acquisition parameters. The network takes a single grayscale volume as input and outputs the same volume with corrected shading and improved HU values. Results: Evaluation was carried out with leave‐one‐out cross‐validation, computed on 18 unique CT/CBCT pairs from six different patients in a real‐world dataset. Comparing original CBCT to CT and improved CBCT to CT, we obtained an average improvement of 6 dB in peak signal‐to‐noise ratio (PSNR) and +2% in structural similarity index measure (SSIM).
The median (interquartile range, IQR) Hounsfield unit (HU) difference between CBCT and CT improved from 161.37 (162.54) HU to 49.41 (66.70) HU. The region of interest (ROI)‐based HU difference was narrowed by 75% in the spongy bone (femoral head), 89% in the bladder, 85% in fat, and 83% in muscle. The improvement in contrast‐to‐noise ratio for these ROIs was about 67%. Conclusions: We demonstrated that shading correction can obtain CT‐compatible data from narrow‐FOV CBCTs acquired with a customized in‐room system. Moreover, the transfer learning approach proved particularly beneficial for this shading correction task.
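The two-stage training described in the Methods (first train every layer on synthetic CBCT/CT data, then fine-tune only the deepest layers on real clinical data) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the five-layer weight list, the learning rate, and the unit gradients are hypothetical stand-ins for a real U-Net's parameters and backpropagated gradients.

```python
import numpy as np

def sgd_step(weights, grads, trainable, lr=0.01):
    """One gradient step that updates only the layers marked trainable.

    `weights` and `grads` are lists of per-layer arrays; `trainable` is a
    list of booleans marking which layers may change (frozen layers are
    returned untouched).
    """
    return [w - lr * g if t else w
            for w, g, t in zip(weights, grads, trainable)]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(5)]  # toy 5-"layer" net
grads = [np.ones((4, 4)) for _ in range(5)]                # dummy gradients

# Stage 1: pretraining on synthetic data -- every layer is trainable.
stage1 = sgd_step(weights, grads, trainable=[True] * 5)

# Stage 2: fine-tuning on clinical data -- freeze all but the deepest layers.
n_finetune = 2                      # hypothetical choice of "deepest layers"
mask = [False] * 3 + [True] * n_finetune
stage2 = sgd_step(stage1, grads, trainable=mask)
```

In a deep-learning framework the same freeze would typically be expressed by disabling gradient tracking on the early layers before the second training stage.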
format Online
Article
Text
id pubmed-9297981
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-9297981 2022-07-21 Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning Rossi, Matteo Belotti, Gabriele Paganelli, Chiara Pella, Andrea Barcellini, Amelia Cerveri, Pietro Baroni, Guido Med Phys QUANTITATIVE IMAGING AND IMAGE PROCESSING John Wiley and Sons Inc. 2021-10-26 2021-11 /pmc/articles/PMC9297981/ /pubmed/34636429 http://dx.doi.org/10.1002/mp.15282 Text en © 2021 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution (CC BY 4.0) License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
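The evaluation figures reported in the abstract (PSNR in dB and the median (IQR) HU difference between CBCT and CT) can be computed as in the sketch below. This is a hedged illustration, not the authors' evaluation code: the toy "volumes" and noise levels are invented, and the `data_range` convention for PSNR is an assumption.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between two same-shaped volumes."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def median_iqr_hu_diff(ct, cbct):
    """Median and interquartile range of the absolute HU difference."""
    diff = np.abs(np.asarray(ct, np.float64) - np.asarray(cbct, np.float64))
    q1, med, q3 = np.percentile(diff, [25, 50, 75])
    return med, q3 - q1

# Toy example: a synthetic CT, a shading-corrupted CBCT, and a corrected one.
rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1000, size=(8, 8, 8))         # values loosely in HU
cbct_raw = ct + rng.normal(150, 60, size=ct.shape)    # large shading offset
cbct_corr = ct + rng.normal(10, 20, size=ct.shape)    # after correction
gain_db = psnr(ct, cbct_corr) - psnr(ct, cbct_raw)    # PSNR improvement in dB
```

On real data the same two functions would be applied slice-wise or volume-wise to each CT/CBCT pair of the leave-one-out folds.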
spellingShingle QUANTITATIVE IMAGING AND IMAGE PROCESSING
Rossi, Matteo
Belotti, Gabriele
Paganelli, Chiara
Pella, Andrea
Barcellini, Amelia
Cerveri, Pietro
Baroni, Guido
Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title_full Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title_fullStr Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title_full_unstemmed Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title_short Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning
title_sort image‐based shading correction for narrow‐fov truncated pelvic cbct with deep convolutional neural networks and transfer learning
topic QUANTITATIVE IMAGING AND IMAGE PROCESSING
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9297981/
https://www.ncbi.nlm.nih.gov/pubmed/34636429
http://dx.doi.org/10.1002/mp.15282
work_keys_str_mv AT rossimatteo imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT belottigabriele imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT paganellichiara imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT pellaandrea imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT barcelliniamelia imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT cerveripietro imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning
AT baroniguido imagebasedshadingcorrectionfornarrowfovtruncatedpelviccbctwithdeepconvolutionalneuralnetworksandtransferlearning