
End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks

Light field (LF) technology has become a focus of great interest (due to its use in many applications), especially since the introduction of the consumer LF camera, which facilitated the acquisition of dense LF images. Obtaining densely sampled LF images is costly due to the trade-off between spatial and angular resolutions. Accordingly, in this research, we suggest a learning-based solution to this challenging problem, reconstructing dense, high-quality LF images. Instead of training our model with several images of the same scene, we used raw LF images (lenslet images). The raw LF format enables the encoding of several images of the same scene into one image. Consequently, it helps the network to understand and simulate the relationship between different images, resulting in higher quality images. We divided our model into two successive modules: LF reconstruction (LFR) and LF augmentation (LFA). Each module is implemented as a convolutional neural network (CNN)-based residual network. We trained our network to lessen the absolute error between the novel and reference views. Experimental findings on real-world datasets show that our suggested method has excellent performance and superiority over state-of-the-art approaches.
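The abstract describes the network only at a high level: an LF reconstruction (LFR) module followed by an LF augmentation (LFA) module, each built as a residual CNN, trained to minimize the absolute (L1) error between the reconstructed novel views and the reference views. The sketch below is only a rough illustration of that description, not the published architecture; the layer counts, channel widths, view count, and the `raw_lf`/`reference_views` tensors are all assumptions made for the example.

```python
# Minimal PyTorch sketch of the two-stage residual design outlined in the
# abstract: an LF reconstruction (LFR) module followed by an LF augmentation
# (LFA) module, trained with an L1 (mean absolute error) objective.
# Depths, widths, and tensor shapes below are illustrative guesses only.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv block with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class ResidualCNN(nn.Module):
    """Generic residual CNN used here for both the LFR and LFA stages."""
    def __init__(self, in_ch: int, out_ch: int, width: int = 64, depth: int = 8):
        super().__init__()
        self.head = nn.Conv2d(in_ch, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.tail = nn.Conv2d(width, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))


class LFReconstructionNet(nn.Module):
    """LFR maps the raw lenslet image to a dense stack of views;
    LFA then refines that stack as an additive residual."""
    def __init__(self, raw_channels: int = 1, num_views: int = 64):
        super().__init__()
        self.lfr = ResidualCNN(raw_channels, num_views)  # raw lenslet -> view stack
        self.lfa = ResidualCNN(num_views, num_views)     # refine the view stack

    def forward(self, raw_lf):
        coarse = self.lfr(raw_lf)
        return coarse + self.lfa(coarse)


# Training objective from the abstract: lessen the absolute error between
# the novel (reconstructed) views and the reference views.
model = LFReconstructionNet()
criterion = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

raw_lf = torch.randn(2, 1, 128, 128)            # dummy raw lenslet batch
reference_views = torch.randn(2, 64, 128, 128)  # dummy ground-truth view stack

optimizer.zero_grad()
loss = criterion(model(raw_lf), reference_views)
loss.backward()
optimizer.step()
```

Treating the LFA stage as an additive residual on the LFR output mirrors the residual-learning theme of the title; how the two modules are actually connected in the paper may differ.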


Bibliographic Details
Main Authors: Salem, Ahmed; Ibrahem, Hatem; Yagoub, Bilel; Kang, Hyun-Soo
Format: Online, Article, Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9104814/
https://www.ncbi.nlm.nih.gov/pubmed/35591229
http://dx.doi.org/10.3390/s22093540
_version_ 1784707886790737920
author Salem, Ahmed
Ibrahem, Hatem
Yagoub, Bilel
Kang, Hyun-Soo
author_sort Salem, Ahmed
collection PubMed
description Light field (LF) technology has become a focus of great interest (due to its use in many applications), especially since the introduction of the consumer LF camera, which facilitated the acquisition of dense LF images. Obtaining densely sampled LF images is costly due to the trade-off between spatial and angular resolutions. Accordingly, in this research, we suggest a learning-based solution to this challenging problem, reconstructing dense, high-quality LF images. Instead of training our model with several images of the same scene, we used raw LF images (lenslet images). The raw LF format enables the encoding of several images of the same scene into one image. Consequently, it helps the network to understand and simulate the relationship between different images, resulting in higher quality images. We divided our model into two successive modules: LFR and LF augmentation (LFA). Each module is represented using a convolutional neural network-based residual network (CNN). We trained our network to lessen the absolute error between the novel and reference views. Experimental findings on real-world datasets show that our suggested method has excellent performance and superiority over state-of-the-art approaches.
format Online
Article
Text
id pubmed-9104814
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9104814 2022-05-14 End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks Salem, Ahmed Ibrahem, Hatem Yagoub, Bilel Kang, Hyun-Soo Sensors (Basel) Article Light field (LF) technology has become a focus of great interest (due to its use in many applications), especially since the introduction of the consumer LF camera, which facilitated the acquisition of dense LF images. Obtaining densely sampled LF images is costly due to the trade-off between spatial and angular resolutions. Accordingly, in this research, we suggest a learning-based solution to this challenging problem, reconstructing dense, high-quality LF images. Instead of training our model with several images of the same scene, we used raw LF images (lenslet images). The raw LF format enables the encoding of several images of the same scene into one image. Consequently, it helps the network to understand and simulate the relationship between different images, resulting in higher quality images. We divided our model into two successive modules: LFR and LF augmentation (LFA). Each module is represented using a convolutional neural network-based residual network (CNN). We trained our network to lessen the absolute error between the novel and reference views. Experimental findings on real-world datasets show that our suggested method has excellent performance and superiority over state-of-the-art approaches. MDPI 2022-05-06 /pmc/articles/PMC9104814/ /pubmed/35591229 http://dx.doi.org/10.3390/s22093540 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks
title_sort end-to-end residual network for light field reconstruction on raw images and view image stacks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9104814/
https://www.ncbi.nlm.nih.gov/pubmed/35591229
http://dx.doi.org/10.3390/s22093540