Transformer-based progressive residual network for single image dehazing
INTRODUCTION: Severely degraded hazy images hinder downstream visual tasks. Recovering a haze-free image is both challenging and important in computer vision. Recently, the vision transformer (ViT) architecture has achieved strong performance in several vision areas. METHODS: In this paper, we propose a new transformer-based progressive residual network. Unlike existing single-stage ViT architectures, we call the progressive residual network recursively and introduce the Swin Transformer. Specifically, our progressive residual network consists of three main components: the recurrent block, the transformer codecs, and the supervised fusion module. First, the recurrent block learns features of the input image while re-injecting the original image features at each iteration. Then, the encoder uses Swin Transformer blocks to encode the feature representation of the decomposed patches, progressively reducing the feature-map resolution to extract long-range context features. The decoder recursively selects and fuses image features by combining an attention mechanism with dense residual blocks. In addition, we add a channel attention mechanism between the codecs to weight the importance of different features. RESULTS AND DISCUSSION: Experimental results show that this method outperforms state-of-the-art handcrafted and learning-based methods.
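The abstract describes a recursive pipeline: a recurrent block re-injects the original image features at each iteration, and each pass adds a residual correction to the running estimate. The loop below is a minimal sketch of that progressive residual recursion only, with hypothetical `encode`/`decode` stubs standing in for the Swin-transformer codec; it is an illustration of the control flow, not the paper's actual network.

```python
from typing import List

def encode(x: List[float]) -> List[float]:
    # stand-in for the Swin-transformer encoder: here, a simple scaling
    return [0.5 * v for v in x]

def decode(f: List[float]) -> List[float]:
    # stand-in for the attention / dense-residual decoder: here, a simple offset
    return [v - 0.1 for v in f]

def progressive_dehaze(image: List[float], stages: int = 3) -> List[float]:
    """Recursively refine a haze-free estimate.

    Each stage fuses the current estimate with the original input features
    (as the abstract's recurrent block does), then adds the codec's output
    as a residual correction to the running estimate.
    """
    estimate = list(image)
    for _ in range(stages):
        # re-inject the original image features alongside the current estimate
        fused = [e + i for e, i in zip(estimate, image)]
        residual = decode(encode(fused))
        estimate = [e + r for e, r in zip(estimate, residual)]
    return estimate
```

The residual update (`estimate + residual`) is what makes the network "progressive": each recursion only has to learn a correction to the previous stage's output rather than a full reconstruction.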
Main authors: Yang, Zhe; Li, Xiaoling; Li, Jinjiang
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9766349/ https://www.ncbi.nlm.nih.gov/pubmed/36561916 http://dx.doi.org/10.3389/fnbot.2022.1084543
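The abstract also mentions a channel attention mechanism inserted between the codecs to weight feature channels by importance. The exact module is not specified in the record, so the sketch below uses a common squeeze-and-excitation-style formulation as an assumed stand-in: global-average "squeeze" per channel, a sigmoid gate in place of the learned excitation MLP, and per-channel rescaling.

```python
import math
from typing import List

def channel_attention(channels: List[List[float]]) -> List[List[float]]:
    """SE-style channel attention sketch (assumed form, not the paper's exact module)."""
    # squeeze: global average pooling per channel
    weights = [sum(c) / len(c) for c in channels]
    # excite: sigmoid gate standing in for the learned excitation network
    gates = [1.0 / (1.0 + math.exp(-w)) for w in weights]
    # rescale: weight each channel by its gate
    return [[g * v for v in c] for g, c in zip(gates, channels)]
```

Channels with stronger average activation receive gates closer to 1 and pass through nearly unchanged, while weak channels are suppressed, which is the "focus on the importance of different features" behavior the abstract describes.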
_version_ | 1784853708814680064 |
author | Yang, Zhe; Li, Xiaoling; Li, Jinjiang
author_facet | Yang, Zhe; Li, Xiaoling; Li, Jinjiang
author_sort | Yang, Zhe |
collection | PubMed |
description | INTRODUCTION: Severely degraded hazy images hinder downstream visual tasks. Recovering a haze-free image is both challenging and important in computer vision. Recently, the vision transformer (ViT) architecture has achieved strong performance in several vision areas. METHODS: In this paper, we propose a new transformer-based progressive residual network. Unlike existing single-stage ViT architectures, we call the progressive residual network recursively and introduce the Swin Transformer. Specifically, our progressive residual network consists of three main components: the recurrent block, the transformer codecs, and the supervised fusion module. First, the recurrent block learns features of the input image while re-injecting the original image features at each iteration. Then, the encoder uses Swin Transformer blocks to encode the feature representation of the decomposed patches, progressively reducing the feature-map resolution to extract long-range context features. The decoder recursively selects and fuses image features by combining an attention mechanism with dense residual blocks. In addition, we add a channel attention mechanism between the codecs to weight the importance of different features. RESULTS AND DISCUSSION: Experimental results show that this method outperforms state-of-the-art handcrafted and learning-based methods. |
format | Online Article Text |
id | pubmed-9766349 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9766349 2022-12-21 Transformer-based progressive residual network for single image dehazing. Yang, Zhe; Li, Xiaoling; Li, Jinjiang. Front Neurorobot (Neuroscience). Frontiers Media S.A. 2022-12-06 /pmc/articles/PMC9766349/ /pubmed/36561916 http://dx.doi.org/10.3389/fnbot.2022.1084543 Text en Copyright © 2022 Yang, Li and Li. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience; Yang, Zhe; Li, Xiaoling; Li, Jinjiang; Transformer-based progressive residual network for single image dehazing |
title | Transformer-based progressive residual network for single image dehazing |
title_full | Transformer-based progressive residual network for single image dehazing |
title_fullStr | Transformer-based progressive residual network for single image dehazing |
title_full_unstemmed | Transformer-based progressive residual network for single image dehazing |
title_short | Transformer-based progressive residual network for single image dehazing |
title_sort | transformer-based progressive residual network for single image dehazing |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9766349/ https://www.ncbi.nlm.nih.gov/pubmed/36561916 http://dx.doi.org/10.3389/fnbot.2022.1084543 |
work_keys_str_mv | AT yangzhe transformerbasedprogressiveresidualnetworkforsingleimagedehazing AT lixiaoling transformerbasedprogressiveresidualnetworkforsingleimagedehazing AT lijinjiang transformerbasedprogressiveresidualnetworkforsingleimagedehazing |