
Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction

Sparse-view CT reconstruction is a fundamental task in computed tomography to overcome undesired artifacts and recover the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performances compared to iterative reconstruction algorithms…

Full description

Bibliographic Details
Main Authors: Sun, Chang, Liu, Yitong, Yang, Hongwen
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8704775/
https://www.ncbi.nlm.nih.gov/pubmed/34941649
http://dx.doi.org/10.3390/tomography7040077
author Sun, Chang
Liu, Yitong
Yang, Hongwen
author_facet Sun, Chang
Liu, Yitong
Yang, Hongwen
author_sort Sun, Chang
collection PubMed
description Sparse-view CT reconstruction is a fundamental task in computed tomography to overcome undesired artifacts and recover the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performances compared to iterative reconstruction algorithms. However, the performance of these methods may severely deteriorate when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so solely extending the training dataset with multiple degraded images is also not effective. Although training plentiful models in terms of each degradation level can mitigate this problem, extensive parameter storage is involved. Accordingly, in this paper, we focused on sparse-view CT reconstruction for multiple degradation levels. We propose a single degradation-aware deep learning framework to predict clear CT images by understanding the disparity of degradation in both the frequency domain and image domain. The dual-domain procedure can perform particular operations at different degradation levels in frequency component recovery and spatial details reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and visual results demonstrate that our method outperformed the classical deep learning-based reconstruction methods in terms of effectiveness and scalability.
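The abstract reports reconstruction quality in PSNR and SSIM. As a hedged illustration only (this is not the authors' code, and the function and array names here are made up for the example), PSNR can be computed from the mean squared error with plain NumPy:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference image and a reconstruction.

    PSNR = 10 * log10(max_val^2 / MSE), where MSE is the mean squared error.
    """
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy check: a uniform error of 0.1 on a [0, 1]-ranged image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((64, 64))
rec = np.full((64, 64), 0.1)
print(round(psnr(ref, rec), 2))  # → 20.0
```

SSIM is more involved (local means, variances, and covariances over a sliding window), so in practice one would typically use a library implementation such as scikit-image's `structural_similarity` rather than hand-rolling it.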
format Online
Article
Text
id pubmed-8704775
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8704775 2021-12-25 Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction Sun, Chang Liu, Yitong Yang, Hongwen Tomography Article Sparse-view CT reconstruction is a fundamental task in computed tomography to overcome undesired artifacts and recover the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performances compared to iterative reconstruction algorithms. However, the performance of these methods may severely deteriorate when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so solely extending the training dataset with multiple degraded images is also not effective. Although training plentiful models in terms of each degradation level can mitigate this problem, extensive parameter storage is involved. Accordingly, in this paper, we focused on sparse-view CT reconstruction for multiple degradation levels. We propose a single degradation-aware deep learning framework to predict clear CT images by understanding the disparity of degradation in both the frequency domain and image domain. The dual-domain procedure can perform particular operations at different degradation levels in frequency component recovery and spatial details reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and visual results demonstrate that our method outperformed the classical deep learning-based reconstruction methods in terms of effectiveness and scalability. MDPI 2021-12-09 /pmc/articles/PMC8704775/ /pubmed/34941649 http://dx.doi.org/10.3390/tomography7040077 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Sun, Chang
Liu, Yitong
Yang, Hongwen
Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title_full Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title_fullStr Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title_full_unstemmed Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title_short Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
title_sort degradation-aware deep learning framework for sparse-view ct reconstruction
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8704775/
https://www.ncbi.nlm.nih.gov/pubmed/34941649
http://dx.doi.org/10.3390/tomography7040077
work_keys_str_mv AT sunchang degradationawaredeeplearningframeworkforsparseviewctreconstruction
AT liuyitong degradationawaredeeplearningframeworkforsparseviewctreconstruction
AT yanghongwen degradationawaredeeplearningframeworkforsparseviewctreconstruction