A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation
Limited-view Computed Tomography (CT) can be used to effectively reduce the radiation dose in clinical diagnosis, and it is also adopted when unavoidable mechanical and physical limitations are encountered in industrial inspection. Nevertheless, limited-view CT introduces severe artifacts into the reconstructed images, which is a major issue in the low-dose protocol. …
Main Authors: | Deng, Ken; Sun, Chang; Gong, Wuxuan; Liu, Yitong; Yang, Hongwen |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8875841/ https://www.ncbi.nlm.nih.gov/pubmed/35214348 http://dx.doi.org/10.3390/s22041446 |
_version_ | 1784658028389203968 |
author | Deng, Ken Sun, Chang Gong, Wuxuan Liu, Yitong Yang, Hongwen |
author_facet | Deng, Ken Sun, Chang Gong, Wuxuan Liu, Yitong Yang, Hongwen |
author_sort | Deng, Ken |
collection | PubMed |
description | Limited-view Computed Tomography (CT) can be used to effectively reduce the radiation dose in clinical diagnosis, and it is also adopted when unavoidable mechanical and physical limitations are encountered in industrial inspection. Nevertheless, limited-view CT introduces severe artifacts into the reconstructed images, which is a major issue in the low-dose protocol. Thus, how to exploit the limited prior information to obtain high-quality CT images becomes a crucial problem. We notice that almost all existing methods focus solely on a single CT image, neglecting the fact that the scanned objects are highly spatially correlated. Consequently, consecutively acquired CT images contain abundant spatial information that remains largely unexploited. In this paper, we propose a novel hybrid-domain framework composed of fully convolutional networks that exploits the three-dimensional neighborhood and works in a “coarse-to-fine” manner. We first perform data completion in the Radon domain and transform the resulting full-view Radon data into images through filtered back-projection (FBP). Subsequently, we exploit the spatial correlation between consecutive CT images to restore them, and then refine the image texture to obtain high-quality CT images, achieving a PSNR of 40.209 and an SSIM of 0.943. In addition, unlike other current limited-view CT reconstruction methods, we adopt FBP (implemented on GPUs) instead of SART-TV, significantly accelerating the overall procedure and realizing it in an end-to-end manner. |
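The description above outlines a three-stage, coarse-to-fine pipeline: sinogram completion in the Radon domain, FBP back-projection to the image domain, and image-domain restoration that exploits neighboring slices, followed by texture refinement. The sketch below is a hypothetical illustration of how such a pipeline could be wired together; the network definitions (`SimpleFCN`), the three-slice neighborhood, and the use of `skimage.transform.iradon` as a CPU stand-in for the paper's GPU FBP are all assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a coarse-to-fine, hybrid-domain reconstruction pipeline.
# All architectures and the FBP routine here are placeholders; the paper's exact
# networks, losses, and GPU FBP are not described in this record.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import iradon  # CPU FBP stand-in for the paper's GPU FBP


class SimpleFCN(nn.Module):
    """Generic fully convolutional block standing in for each stage's network."""

    def __init__(self, in_ch: int, out_ch: int, width: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


# Stage 1: complete the limited-view sinogram in the Radon domain.
radon_completion = SimpleFCN(in_ch=1, out_ch=1)
# Stage 2: restore a slice using spatially correlated neighbours (3 slices as channels).
spatial_restoration = SimpleFCN(in_ch=3, out_ch=1)
# Stage 3: refine the texture of the restored slice.
texture_refinement = SimpleFCN(in_ch=1, out_ch=1)


def reconstruct(limited_sinograms: list[np.ndarray], angles: np.ndarray) -> torch.Tensor:
    """Run the coarse-to-fine pipeline on consecutive slices of one scan."""
    coarse_slices = []
    for sino in limited_sinograms:
        # Radon-domain completion (coarse stage).
        sino_t = torch.from_numpy(sino).float()[None, None]
        full_sino = radon_completion(sino_t)[0, 0].detach().numpy()
        # FBP back-projection to the image domain (GPU FBP in the paper).
        coarse_slices.append(iradon(full_sino, theta=angles, filter_name="ramp"))
    # Stack consecutive coarse slices so the restoration network can see the
    # three-dimensional neighbourhood of the centre slice.
    volume = torch.from_numpy(np.stack(coarse_slices)).float()[None]  # (1, S, H, W)
    restored = spatial_restoration(volume[:, :3])
    # Final texture refinement (fine stage).
    return texture_refinement(restored)
```

Stacking adjacent slices as input channels is one simple way to expose inter-slice spatial correlation to a 2D fully convolutional network; whether the authors use this scheme or a different fusion mechanism is not stated in this record.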
format | Online Article Text |
id | pubmed-8875841 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8875841 2022-02-26 A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation Deng, Ken Sun, Chang Gong, Wuxuan Liu, Yitong Yang, Hongwen Sensors (Basel) Article Limited-view Computed Tomography (CT) can be used to effectively reduce the radiation dose in clinical diagnosis, and it is also adopted when unavoidable mechanical and physical limitations are encountered in industrial inspection. Nevertheless, limited-view CT introduces severe artifacts into the reconstructed images, which is a major issue in the low-dose protocol. Thus, how to exploit the limited prior information to obtain high-quality CT images becomes a crucial problem. We notice that almost all existing methods focus solely on a single CT image, neglecting the fact that the scanned objects are highly spatially correlated. Consequently, consecutively acquired CT images contain abundant spatial information that remains largely unexploited. In this paper, we propose a novel hybrid-domain framework composed of fully convolutional networks that exploits the three-dimensional neighborhood and works in a “coarse-to-fine” manner. We first perform data completion in the Radon domain and transform the resulting full-view Radon data into images through filtered back-projection (FBP). Subsequently, we exploit the spatial correlation between consecutive CT images to restore them, and then refine the image texture to obtain high-quality CT images, achieving a PSNR of 40.209 and an SSIM of 0.943. In addition, unlike other current limited-view CT reconstruction methods, we adopt FBP (implemented on GPUs) instead of SART-TV, significantly accelerating the overall procedure and realizing it in an end-to-end manner. MDPI 2022-02-13 /pmc/articles/PMC8875841/ /pubmed/35214348 http://dx.doi.org/10.3390/s22041446 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Deng, Ken Sun, Chang Gong, Wuxuan Liu, Yitong Yang, Hongwen A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title | A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title_full | A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title_fullStr | A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title_full_unstemmed | A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title_short | A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation |
title_sort | limited-view ct reconstruction framework based on hybrid domains and spatial correlation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8875841/ https://www.ncbi.nlm.nih.gov/pubmed/35214348 http://dx.doi.org/10.3390/s22041446 |
work_keys_str_mv | AT dengken alimitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT sunchang alimitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT gongwuxuan alimitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT liuyitong alimitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT yanghongwen alimitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT dengken limitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT sunchang limitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT gongwuxuan limitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT liuyitong limitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation AT yanghongwen limitedviewctreconstructionframeworkbasedonhybriddomainsandspatialcorrelation |