Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks
Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has long been a neglected issue. To address this problem, and inspired by the recent excellent results achieved in color image super-resolution by generative adversarial networks, we propose a depth map super-resolution framework with generative adversarial networks using multiscale attention fusion. Fusing color features and depth features at the same scale within the hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing the joint color–depth features across scales balances the impact of each scale on the super-resolution of the depth map. The generator's loss function, composed of content loss, adversarial loss, and edge loss, helps restore sharper edges in the depth map. Experimental results on benchmark depth map datasets of different types show that the proposed multiscale-attention-fusion-based depth map super-resolution framework achieves significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model.
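The abstract describes two fusion steps: attention-weighted fusion of color and depth features at each individual scale, followed by a balancing of the fused features across scales. The record does not reproduce the paper's architecture; the following is a minimal PyTorch sketch of what such a scheme could look like, where the module names, layer choices, and the sigmoid-mask formulation are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Illustrative same-scale fusion: a learned mask decides, per pixel and
    per channel, how strongly color features guide the depth features.
    A sketch only; not the authors' hierarchical fusion attention module."""

    def __init__(self, channels: int):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # guidance weight in [0, 1]
        )

    def forward(self, depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
        weight = self.attention(torch.cat([depth_feat, color_feat], dim=1))
        # Color guidance is added only where the mask deems it useful,
        # which is one way to "measure" the color image's guiding effect.
        return depth_feat + weight * color_feat


class MultiscaleFusion(nn.Module):
    """Illustrative cross-scale balancing: fused features from every scale
    are resized to the finest resolution and mixed by a 1x1 convolution
    whose weights learn how much each scale contributes."""

    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        self.mix = nn.Conv2d(num_scales * channels, channels, kernel_size=1)

    def forward(self, feats: list) -> torch.Tensor:
        # feats[0] is assumed to be the finest scale.
        size = feats[0].shape[-2:]
        up = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
              for f in feats]
        return self.mix(torch.cat(up, dim=1))
```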
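The generator loss is stated only as a sum of content, adversarial, and edge terms. The sketch below fills in one plausible reading: L1 distances, a Sobel operator for the edge term, and small weights on the adversarial and edge terms. All of these choices, including the weight values, are assumptions for illustration and may differ from the paper's definition.

```python
import torch
import torch.nn.functional as F

# Sobel kernels used here to extract depth edges (an assumed choice of
# edge operator; the paper may define its edge loss differently).
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)


def edge_map(depth: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of a single-channel depth batch (N, 1, H, W)."""
    gx = F.conv2d(depth, SOBEL_X.to(depth.device), padding=1)
    gy = F.conv2d(depth, SOBEL_Y.to(depth.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def generator_loss(sr, hr, disc_logits, w_adv=1e-3, w_edge=0.1):
    """Content + adversarial + edge loss; the weights are illustrative."""
    content = F.l1_loss(sr, hr)
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))  # fool the discriminator
    edge = F.l1_loss(edge_map(sr), edge_map(hr))    # encourage sharp edges
    return content + w_adv * adversarial + w_edge * edge
```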
Main Authors: | Xu, Dan; Fan, Xiaopeng; Gao, Wen |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10296949/ https://www.ncbi.nlm.nih.gov/pubmed/37372180 http://dx.doi.org/10.3390/e25060836 |
_version_ | 1785063768670076928 |
---|---|
author | Xu, Dan; Fan, Xiaopeng; Gao, Wen |
author_facet | Xu, Dan; Fan, Xiaopeng; Gao, Wen |
author_sort | Xu, Dan |
collection | PubMed |
description | Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has long been a neglected issue. To address this problem, and inspired by the recent excellent results achieved in color image super-resolution by generative adversarial networks, we propose a depth map super-resolution framework with generative adversarial networks using multiscale attention fusion. Fusing color features and depth features at the same scale within the hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing the joint color–depth features across scales balances the impact of each scale on the super-resolution of the depth map. The generator's loss function, composed of content loss, adversarial loss, and edge loss, helps restore sharper edges in the depth map. Experimental results on benchmark depth map datasets of different types show that the proposed multiscale-attention-fusion-based depth map super-resolution framework achieves significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model. |
format | Online Article Text |
id | pubmed-10296949 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10296949 2023-06-28 Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks Xu, Dan; Fan, Xiaopeng; Gao, Wen Entropy (Basel) Article Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has long been a neglected issue. To address this problem, and inspired by the recent excellent results achieved in color image super-resolution by generative adversarial networks, we propose a depth map super-resolution framework with generative adversarial networks using multiscale attention fusion. Fusing color features and depth features at the same scale within the hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing the joint color–depth features across scales balances the impact of each scale on the super-resolution of the depth map. The generator's loss function, composed of content loss, adversarial loss, and edge loss, helps restore sharper edges in the depth map. Experimental results on benchmark depth map datasets of different types show that the proposed multiscale-attention-fusion-based depth map super-resolution framework achieves significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model. MDPI 2023-05-23 /pmc/articles/PMC10296949/ /pubmed/37372180 http://dx.doi.org/10.3390/e25060836 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Xu, Dan; Fan, Xiaopeng; Gao, Wen Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title | Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title_full | Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title_fullStr | Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title_full_unstemmed | Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title_short | Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks |
title_sort | multiscale attention fusion for depth map super-resolution generative adversarial networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10296949/ https://www.ncbi.nlm.nih.gov/pubmed/37372180 http://dx.doi.org/10.3390/e25060836 |
work_keys_str_mv | AT xudan multiscaleattentionfusionfordepthmapsuperresolutiongenerativeadversarialnetworks AT fanxiaopeng multiscaleattentionfusionfordepthmapsuperresolutiongenerativeadversarialnetworks AT gaowen multiscaleattentionfusionfordepthmapsuperresolutiongenerativeadversarialnetworks |