
FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images

A light field camera can capture light information from various directions within a scene, allowing for the reconstruction of the scene. The light field image inherently contains the depth information of the scene, and depth estimations of light field images have become a popular research topic. This paper proposes a depth estimation network of light field images with occlusion awareness. Since light field images contain many views from different viewpoints, identifying the combinations that contribute the most to the depth estimation of the center view is critical to improving the depth estimation accuracy. Current methods typically rely on a fixed set of views, such as vertical, horizontal, and diagonal, which may not be optimal for all scenes. To address this limitation, we propose a novel approach that considers all available views during depth estimation while leveraging an attention mechanism to assign weights to each view dynamically. By inputting all views into the network and employing the attention mechanism, we enable the model to adaptively determine the most informative views for each scene, thus achieving more accurate depth estimation. Furthermore, we introduce a multi-scale feature fusion strategy that amalgamates contextual information and expands the receptive field to enhance the network’s performance in handling challenging scenarios, such as textureless and occluded regions.
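The record contains no source code, but the abstract names two concrete mechanisms: attention-based weighting of all sub-aperture views and multi-scale feature fusion. The PyTorch sketch below is only a minimal illustration of those two ideas, assuming a 9×9 view grid and per-view convolutional features; the module names (ViewAttention, MultiScaleFusion), layer widths, and dilation rates are invented for illustration and are not the authors' FPattNet implementation.

```python
import torch
import torch.nn as nn


class ViewAttention(nn.Module):
    """Assigns one weight per sub-aperture view and fuses the weighted features."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),             # global context for each view
            nn.Flatten(),                        # (B*N, C)
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),         # scalar score per view
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C, H, W), N = number of views (e.g. 81 for a 9x9 light field)
        b, n, c, h, w = feats.shape
        logits = self.score(feats.reshape(b * n, c, h, w)).view(b, n, 1, 1, 1)
        weights = torch.softmax(logits, dim=1)   # views compete for influence
        return (weights * feats).sum(dim=1)      # (B, C, H, W) fused centre-view feature


class MultiScaleFusion(nn.Module):
    """Parallel dilated convolutions widen the receptive field; a 1x1 conv merges them."""

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.merge = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([branch(x) for branch in self.branches], dim=1))


if __name__ == "__main__":
    views = torch.randn(2, 81, 32, 64, 64)       # toy batch: 81 views, 32-channel features
    fused = ViewAttention(32)(views)             # (2, 32, 64, 64)
    context = MultiScaleFusion(32)(fused)        # same shape, larger receptive field
    print(fused.shape, context.shape)
```

In a full pipeline along the lines described in the abstract, a shared encoder would extract the per-view features before ViewAttention, and the fused, multi-scale features would then be regressed to a disparity map for the centre view.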


Bibliographic Details
Main Authors: Xiao, Min; Lv, Chen; Liu, Xiaomin
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490666/
https://www.ncbi.nlm.nih.gov/pubmed/37687936
http://dx.doi.org/10.3390/s23177480
_version_ 1785103893176254464
author Xiao, Min
Lv, Chen
Liu, Xiaomin
collection PubMed
description A light field camera can capture light information from various directions within a scene, allowing for the reconstruction of the scene. The light field image inherently contains the depth information of the scene, and depth estimations of light field images have become a popular research topic. This paper proposes a depth estimation network of light field images with occlusion awareness. Since light field images contain many views from different viewpoints, identifying the combinations that contribute the most to the depth estimation of the center view is critical to improving the depth estimation accuracy. Current methods typically rely on a fixed set of views, such as vertical, horizontal, and diagonal, which may not be optimal for all scenes. To address this limitation, we propose a novel approach that considers all available views during depth estimation while leveraging an attention mechanism to assign weights to each view dynamically. By inputting all views into the network and employing the attention mechanism, we enable the model to adaptively determine the most informative views for each scene, thus achieving more accurate depth estimation. Furthermore, we introduce a multi-scale feature fusion strategy that amalgamates contextual information and expands the receptive field to enhance the network’s performance in handling challenging scenarios, such as textureless and occluded regions.
format Online
Article
Text
id pubmed-10490666
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10490666 2023-09-09 FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images Xiao, Min; Lv, Chen; Liu, Xiaomin Sensors (Basel) Article MDPI 2023-08-28 /pmc/articles/PMC10490666/ /pubmed/37687936 http://dx.doi.org/10.3390/s23177480 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490666/
https://www.ncbi.nlm.nih.gov/pubmed/37687936
http://dx.doi.org/10.3390/s23177480