
DVDR-SRGAN: Differential Value Dense Residual Super-Resolution Generative Adversarial Network

Bibliographic Details

Main Authors: Qu, Hang, Yi, Huawei, Shi, Yanlan, Lan, Jie
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10221380/
https://www.ncbi.nlm.nih.gov/pubmed/37430768
http://dx.doi.org/10.3390/s23104854
author Qu, Hang
Yi, Huawei
Shi, Yanlan
Lan, Jie
collection PubMed
description In the field of single-image super-resolution reconstruction, GANs can produce image textures that better match human visual perception. However, the reconstruction process is prone to generating artifacts and false textures, and to large deviations in detail between the reconstructed image and the ground truth. To further improve visual quality, we study the feature correlation between adjacent layers and propose a differential value dense residual network to address this problem. We first use a deconvolution layer to enlarge the features, then extract features through a convolution layer, and finally take the difference between the features before enlargement and the features after extraction, so that the difference better highlights the regions that need attention. When extracting the differential value, using dense residual connections at each layer makes the enlarged features more complete, so the resulting differential value is more accurate. Next, a joint loss function is introduced to fuse high-frequency and low-frequency information, which further improves the visual quality of the reconstructed image. Experimental results on the Set5, Set14, BSD100, and Urban datasets show that the proposed DVDR-SRGAN model outperforms the Bicubic, SRGAN, ESRGAN, Beby-GAN, and SPSR models in terms of PSNR, SSIM, and LPIPS.
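The description above outlines the differential-value step: enlarge a feature map, re-extract features from the enlarged map, and subtract so that the difference highlights regions needing attention. As a rough illustration only (not the paper's implementation), the sketch below substitutes nearest-neighbour upsampling for the deconvolution layer and an edge-clipped 3×3 mean filter for the learned convolution; all function names are hypothetical:

```python
# Toy sketch of the "differential value" idea: enlarge a 2-D feature
# map, re-extract features from it, and subtract. The difference is
# large where re-extraction (smoothing) changes the enlarged features
# most, i.e. near edges and fine detail.

def upsample_nn(feat, scale):
    """Nearest-neighbour enlargement of a 2-D feature map."""
    return [[feat[r // scale][c // scale]
             for c in range(len(feat[0]) * scale)]
            for r in range(len(feat) * scale)]

def mean_filter3(feat):
    """Edge-clipped 3x3 mean filter standing in for feature extraction."""
    h, w = len(feat), len(feat[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [feat[rr][cc]
                    for rr in range(max(0, r - 1), min(h, r + 2))
                    for cc in range(max(0, c - 1), min(w, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out

def differential_value(feat, scale=2):
    """Elementwise difference: enlarged features minus re-extracted ones."""
    up = upsample_nn(feat, scale)
    extracted = mean_filter3(up)
    return [[u - e for u, e in zip(ur, er)]
            for ur, er in zip(up, extracted)]

# A small checkerboard-like feature map.
feat = [[0.0, 1.0],
        [1.0, 0.0]]
dv = differential_value(feat, scale=2)
```

On this input the differential value is zero inside flat regions and largest around the boundaries between the 0 and 1 blocks, matching the intuition that the difference flags detail regions the network should attend to.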
format Online
Article
Text
id pubmed-10221380
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10221380 2023-05-28 DVDR-SRGAN: Differential Value Dense Residual Super-Resolution Generative Adversarial Network Qu, Hang; Yi, Huawei; Shi, Yanlan; Lan, Jie Sensors (Basel) Article MDPI 2023-05-18 /pmc/articles/PMC10221380/ /pubmed/37430768 http://dx.doi.org/10.3390/s23104854 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. https://creativecommons.org/licenses/by/4.0/
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title DVDR-SRGAN: Differential Value Dense Residual Super-Resolution Generative Adversarial Network
topic Article