Attention Network with Information Distillation for Super-Resolution
Main Authors: | Zang, Huaijuan; Zhao, Ying; Niu, Chao; Zhang, Haiyan; Zhan, Shu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497852/ https://www.ncbi.nlm.nih.gov/pubmed/36141112 http://dx.doi.org/10.3390/e24091226 |
_version_ | 1784794609641062400 |
author | Zang, Huaijuan Zhao, Ying Niu, Chao Zhang, Haiyan Zhan, Shu |
author_facet | Zang, Huaijuan Zhao, Ying Niu, Chao Zhang, Haiyan Zhan, Shu |
author_sort | Zang, Huaijuan |
collection | PubMed |
description | Resolution is an intuitive assessment of the visual quality of images, and it is limited by physical devices. Recently, image super-resolution (SR) models based on deep convolutional neural networks (CNNs) have made significant progress. However, most existing SR models incur high computational costs that grow with network depth, hindering practical application. In addition, these models treat intermediate features equally and rarely explore the discriminative capacity hidden in their abundant features. To tackle these issues, we propose an attention network with information distillation (AIDN) for efficient and accurate image super-resolution, which adaptively modulates the feature responses by modeling the interactions between the channel dimension and spatial features. Specifically, a gated channel transformation (GCT) is introduced to gather global contextual information among different channels to modulate intermediate high-level features. Moreover, a recalibrated attention module (RAM) is proposed to rescale these feature responses; RAM concentrates on the essential contents around spatial locations. Benefiting from the gated channel transformation and spatial information masks working jointly, our proposed AIDN obtains a more powerful ability to identify informative features. It improves computational efficiency while also improving reconstruction accuracy. Comprehensive quantitative and qualitative evaluations demonstrate that our AIDN outperforms state-of-the-art models in terms of reconstruction performance and visual quality. |
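The channel-modulation mechanism the abstract names, gated channel transformation (GCT), can be illustrated with a minimal NumPy sketch. This follows the generally published GCT formulation (per-channel L2-norm embedding, cross-channel normalization, tanh gating); it is not taken from the AIDN paper itself, and the parameter names `alpha`, `gamma`, and `beta` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gct(x, alpha, gamma, beta, eps=1e-5):
    """Minimal gated channel transformation (GCT) sketch.

    x: feature map of shape (C, H, W); alpha, gamma, beta: per-channel
    parameters of shape (C,). Each channel is summarized by its
    alpha-scaled L2 norm, the summaries are normalized across channels,
    and a tanh gate rescales the input features channel-wise.
    """
    C = x.shape[0]
    # Global context embedding: alpha-scaled L2 norm of each channel.
    s = alpha * np.sqrt((x ** 2).sum(axis=(1, 2)) + eps)      # (C,)
    # Channel normalization: lets channels compete for responses.
    s_hat = s * np.sqrt(C) / np.sqrt((s ** 2).sum() + eps)    # (C,)
    # Gating: 1 + tanh keeps the transform near identity at init.
    gate = 1.0 + np.tanh(gamma * s_hat + beta)                # (C,)
    return x * gate[:, None, None]

# With gamma = beta = 0 the gate is identically 1, so GCT starts
# as an identity mapping and learns to modulate channels from there.
x = np.random.rand(4, 8, 8)
y = gct(x, alpha=np.ones(4), gamma=np.zeros(4), beta=np.zeros(4))
```

The `1 + tanh(...)` form is the reason GCT is cheap to insert into an existing network: at initialization (with the gating parameters at zero) it leaves features untouched, and training only gradually turns the channel competition on.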
format | Online Article Text |
id | pubmed-9497852 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9497852 2022-09-23 Attention Network with Information Distillation for Super-Resolution Zang, Huaijuan Zhao, Ying Niu, Chao Zhang, Haiyan Zhan, Shu Entropy (Basel) Article Resolution is an intuitive assessment of the visual quality of images, and it is limited by physical devices. Recently, image super-resolution (SR) models based on deep convolutional neural networks (CNNs) have made significant progress. However, most existing SR models incur high computational costs that grow with network depth, hindering practical application. In addition, these models treat intermediate features equally and rarely explore the discriminative capacity hidden in their abundant features. To tackle these issues, we propose an attention network with information distillation (AIDN) for efficient and accurate image super-resolution, which adaptively modulates the feature responses by modeling the interactions between the channel dimension and spatial features. Specifically, a gated channel transformation (GCT) is introduced to gather global contextual information among different channels to modulate intermediate high-level features. Moreover, a recalibrated attention module (RAM) is proposed to rescale these feature responses; RAM concentrates on the essential contents around spatial locations. Benefiting from the gated channel transformation and spatial information masks working jointly, our proposed AIDN obtains a more powerful ability to identify informative features. It improves computational efficiency while also improving reconstruction accuracy. Comprehensive quantitative and qualitative evaluations demonstrate that our AIDN outperforms state-of-the-art models in terms of reconstruction performance and visual quality. MDPI 2022-09-01 /pmc/articles/PMC9497852/ /pubmed/36141112 http://dx.doi.org/10.3390/e24091226 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zang, Huaijuan Zhao, Ying Niu, Chao Zhang, Haiyan Zhan, Shu Attention Network with Information Distillation for Super-Resolution |
title | Attention Network with Information Distillation for Super-Resolution |
title_full | Attention Network with Information Distillation for Super-Resolution |
title_fullStr | Attention Network with Information Distillation for Super-Resolution |
title_full_unstemmed | Attention Network with Information Distillation for Super-Resolution |
title_short | Attention Network with Information Distillation for Super-Resolution |
title_sort | attention network with information distillation for super-resolution |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9497852/ https://www.ncbi.nlm.nih.gov/pubmed/36141112 http://dx.doi.org/10.3390/e24091226 |
work_keys_str_mv | AT zanghuaijuan attentionnetworkwithinformationdistillationforsuperresolution AT zhaoying attentionnetworkwithinformationdistillationforsuperresolution AT niuchao attentionnetworkwithinformationdistillationforsuperresolution AT zhanghaiyan attentionnetworkwithinformationdistillationforsuperresolution AT zhanshu attentionnetworkwithinformationdistillationforsuperresolution |