Image Super-Resolution via Dual-Level Recurrent Residual Networks
Recently, feedforward super-resolution networks based on deep learning were proposed to learn the representation of a low-resolution (LR) input and the non-linear mapping from this input to a high-resolution (HR) output, but such methods cannot fully exploit the interdependence between LR and HR images. In this paper, we retain the feedforward architecture and introduce residuals at a dual level; accordingly, we propose the dual-level recurrent residual network (DLRRN) to generate HR images with rich details and satisfactory visual quality. Compared with feedforward networks that operate at a fixed spatial resolution, the dual-level recurrent residual block (DLRRB) in DLRRN utilizes information from both LR and HR space. The recurrent signals in the DLRRB enhance spatial details through mutual guidance between the two directions (LR to HR and HR to LR). Specifically, the LR information of the current layer is generated from the HR and LR information of the previous layer; then, the HR information of the previous layer and the LR information of the current layer jointly generate the HR information of the current layer, and so on. The proposed DLRRN has a strong early-reconstruction ability and gradually restores the final high-resolution image. Extensive quantitative and qualitative evaluations on benchmark datasets show that our network achieves good results in terms of network parameters, visual effects, and objective performance metrics.
Main authors: | Tan, Congming; Wang, Liejun; Cheng, Shuli |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9032326/ https://www.ncbi.nlm.nih.gov/pubmed/35459043 http://dx.doi.org/10.3390/s22083058 |
_version_ | 1784692615019495424 |
---|---|
author | Tan, Congming; Wang, Liejun; Cheng, Shuli |
author_facet | Tan, Congming; Wang, Liejun; Cheng, Shuli |
author_sort | Tan, Congming |
collection | PubMed |
description | Recently, feedforward super-resolution networks based on deep learning were proposed to learn the representation of a low-resolution (LR) input and the non-linear mapping from this input to a high-resolution (HR) output, but such methods cannot fully exploit the interdependence between LR and HR images. In this paper, we retain the feedforward architecture and introduce residuals at a dual level; accordingly, we propose the dual-level recurrent residual network (DLRRN) to generate HR images with rich details and satisfactory visual quality. Compared with feedforward networks that operate at a fixed spatial resolution, the dual-level recurrent residual block (DLRRB) in DLRRN utilizes information from both LR and HR space. The recurrent signals in the DLRRB enhance spatial details through mutual guidance between the two directions (LR to HR and HR to LR). Specifically, the LR information of the current layer is generated from the HR and LR information of the previous layer; then, the HR information of the previous layer and the LR information of the current layer jointly generate the HR information of the current layer, and so on. The proposed DLRRN has a strong early-reconstruction ability and gradually restores the final high-resolution image. Extensive quantitative and qualitative evaluations on benchmark datasets show that our network achieves good results in terms of network parameters, visual effects, and objective performance metrics. |
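The dual-level recurrence in the abstract can be sketched in code. The block below is a minimal PyTorch illustration, not the authors' actual DLRRB: the class name, channel count, and the specific down/up-sampling layers (a strided convolution and a transposed convolution) are assumptions chosen only to show the two-direction update order, i.e. current LR features from previous LR and previous HR, then current HR features from previous HR and current LR.

```python
import torch
import torch.nn as nn

class DualLevelRecurrentBlock(nn.Module):
    """Hypothetical sketch of the dual-level recurrence described in the
    abstract: LR_t = f(LR_{t-1}, down(HR_{t-1})),
              HR_t = g(HR_{t-1}, up(LR_t))."""

    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        # Bring HR features down to LR space (strided conv), then fuse with LR.
        self.down = nn.Conv2d(channels, channels, 3, stride=scale, padding=1)
        self.lr_fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Bring LR features up to HR space (transposed conv), then fuse with HR.
        self.up = nn.ConvTranspose2d(channels, channels, scale, stride=scale)
        self.hr_fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, lr_prev: torch.Tensor, hr_prev: torch.Tensor):
        # Current LR features from previous-layer LR and (downsampled) HR.
        lr_cur = self.act(
            self.lr_fuse(torch.cat([lr_prev, self.down(hr_prev)], dim=1)))
        # Current HR features from previous-layer HR and (upsampled) current LR.
        hr_cur = self.act(
            self.hr_fuse(torch.cat([hr_prev, self.up(lr_cur)], dim=1)))
        # Residual connections keep the recurrence stable when the block
        # is applied repeatedly across layers.
        return lr_prev + lr_cur, hr_prev + hr_cur

if __name__ == "__main__":
    block = DualLevelRecurrentBlock(channels=64, scale=2)
    lr = torch.randn(1, 64, 16, 16)   # LR-space feature map
    hr = torch.randn(1, 64, 32, 32)   # HR-space feature map (2x spatial size)
    lr_next, hr_next = block(lr, hr)
    print(lr_next.shape, hr_next.shape)
```

Iterating such a block and finishing with a reconstruction convolution would give the gradual, early-reconstruction behavior the abstract claims; the exact number of recurrences and fusion layers in DLRRN is specified in the full paper, not here.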
format | Online Article Text |
id | pubmed-9032326 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9032326 2022-04-23 Image Super-Resolution via Dual-Level Recurrent Residual Networks Tan, Congming; Wang, Liejun; Cheng, Shuli Sensors (Basel) Article MDPI 2022-04-15 /pmc/articles/PMC9032326/ /pubmed/35459043 http://dx.doi.org/10.3390/s22083058 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Tan, Congming Wang, Liejun Cheng, Shuli Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title | Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title_full | Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title_fullStr | Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title_full_unstemmed | Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title_short | Image Super-Resolution via Dual-Level Recurrent Residual Networks |
title_sort | image super-resolution via dual-level recurrent residual networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9032326/ https://www.ncbi.nlm.nih.gov/pubmed/35459043 http://dx.doi.org/10.3390/s22083058 |
work_keys_str_mv | AT tancongming imagesuperresolutionviaduallevelrecurrentresidualnetworks AT wangliejun imagesuperresolutionviaduallevelrecurrentresidualnetworks AT chengshuli imagesuperresolutionviaduallevelrecurrentresidualnetworks |