Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network
Introduction Cone beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT), but suffers from severe shading artifacts caused by reconstruction from scatter-contaminated and truncated projections. The purpose of this study is to develop a deep...
Main authors: | Kida, Satoshi; Nakamoto, Takahiro; Nakano, Masahiro; Nawa, Kanabu; Haga, Akihiro; Kotoku, Jun'ichi; Yamashita, Hideomi; Nakagawa, Keiichi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cureus, 2018 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6021187/ https://www.ncbi.nlm.nih.gov/pubmed/29963342 http://dx.doi.org/10.7759/cureus.2548 |
_version_ | 1783335426245787648 |
---|---|
author | Kida, Satoshi Nakamoto, Takahiro Nakano, Masahiro Nawa, Kanabu Haga, Akihiro Kotoku, Jun'ichi Yamashita, Hideomi Nakagawa, Keiichi |
author_facet | Kida, Satoshi Nakamoto, Takahiro Nakano, Masahiro Nawa, Kanabu Haga, Akihiro Kotoku, Jun'ichi Yamashita, Hideomi Nakagawa, Keiichi |
author_sort | Kida, Satoshi |
collection | PubMed |
description | Introduction Cone beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT), but suffers from severe shading artifacts caused by reconstruction from scatter-contaminated and truncated projections. The purpose of this study is to develop a deep convolutional neural network (DCNN) method for improving CBCT image quality. Methods CBCT and planning computed tomography (pCT) image pairs from 20 prostate cancer patients were selected. Subsequently, each pCT volume was pre-aligned to the corresponding CBCT volume by image registration, yielding registered pCT data (pCT(r)). Next, a 39-layer DCNN model was trained to learn a direct mapping from the CBCT to the corresponding pCT(r) images. The trained model was applied to a new CBCT data set to obtain improved CBCT (i-CBCT) images. The resulting i-CBCT images were compared to pCT(r) using the spatial non-uniformity (SNU), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). Results The image quality of the i-CBCT showed a substantial improvement in spatial uniformity compared to the original CBCT, and a significant improvement in the PSNR and the SSIM compared to both the original CBCT and the CBCT enhanced by an existing pCT-based correction method. Conclusion We have developed a DCNN method for improving CBCT image quality. The proposed method may be directly applicable to CBCT images acquired by any commercial CBCT scanner. |
format | Online Article Text |
id | pubmed-6021187 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Cureus |
record_format | MEDLINE/PubMed |
spelling | pubmed-60211872018-06-29 Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network Kida, Satoshi Nakamoto, Takahiro Nakano, Masahiro Nawa, Kanabu Haga, Akihiro Kotoku, Jun'ichi Yamashita, Hideomi Nakagawa, Keiichi Cureus Medical Physics Introduction Cone beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT), but suffers from severe shading artifacts caused by reconstruction from scatter-contaminated and truncated projections. The purpose of this study is to develop a deep convolutional neural network (DCNN) method for improving CBCT image quality. Methods CBCT and planning computed tomography (pCT) image pairs from 20 prostate cancer patients were selected. Subsequently, each pCT volume was pre-aligned to the corresponding CBCT volume by image registration, yielding registered pCT data (pCT(r)). Next, a 39-layer DCNN model was trained to learn a direct mapping from the CBCT to the corresponding pCT(r) images. The trained model was applied to a new CBCT data set to obtain improved CBCT (i-CBCT) images. The resulting i-CBCT images were compared to pCT(r) using the spatial non-uniformity (SNU), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). Results The image quality of the i-CBCT showed a substantial improvement in spatial uniformity compared to the original CBCT, and a significant improvement in the PSNR and the SSIM compared to both the original CBCT and the CBCT enhanced by an existing pCT-based correction method. Conclusion We have developed a DCNN method for improving CBCT image quality. The proposed method may be directly applicable to CBCT images acquired by any commercial CBCT scanner. Cureus 2018-04-29 /pmc/articles/PMC6021187/ /pubmed/29963342 http://dx.doi.org/10.7759/cureus.2548 Text en Copyright © 2018, Kida et al.
http://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Medical Physics Kida, Satoshi Nakamoto, Takahiro Nakano, Masahiro Nawa, Kanabu Haga, Akihiro Kotoku, Jun'ichi Yamashita, Hideomi Nakagawa, Keiichi Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title | Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title_full | Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title_fullStr | Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title_full_unstemmed | Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title_short | Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network |
title_sort | cone beam computed tomography image quality improvement using a deep convolutional neural network |
topic | Medical Physics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6021187/ https://www.ncbi.nlm.nih.gov/pubmed/29963342 http://dx.doi.org/10.7759/cureus.2548 |
work_keys_str_mv | AT kidasatoshi conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT nakamototakahiro conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT nakanomasahiro conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT nawakanabu conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT hagaakihiro conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT kotokujunichi conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT yamashitahideomi conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork AT nakagawakeiichi conebeamcomputedtomographyimagequalityimprovementusingadeepconvolutionalneuralnetwork |
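The abstract above evaluates image quality with the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The following Python sketch illustrates these two metrics on synthetic arrays; it is not the authors' evaluation code, and the SSIM here is a simplified global variant (computed over the whole image rather than with the usual sliding window), shown only to make the definitions concrete.

```python
import numpy as np

def psnr(reference, test, data_range):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Simplified SSIM computed globally over the whole image (no window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: compare a noisy image against its clean reference.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.05, size=ref.shape), 0.0, 1.0)
print(psnr(ref, noisy, data_range=1.0))         # higher is better
print(ssim_global(ref, noisy, data_range=1.0))  # closer to 1 is better
```

For both metrics, larger values indicate that the test image is closer to the reference; in the study, the improved CBCT is the test image and the registered planning CT (pCT(r)) is the reference.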