A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment
Occlusion awareness is one of the most challenging problems in several fields, such as multimedia, remote sensing, computer vision, and computer graphics. Realistic interactive applications struggle with occlusion and collision in dynamic environments, and dense 3D re...
Main Authors: | Fathy, Ghada M., Hassan, Hanan A., Sheta, Walaa, Omara, Fatma A., Nabil, Emad |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2021 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8157153/ https://www.ncbi.nlm.nih.gov/pubmed/34084931 http://dx.doi.org/10.7717/peerj-cs.529 |
_version_ | 1783699617654767616 |
---|---|
author | Fathy, Ghada M. Hassan, Hanan A. Sheta, Walaa Omara, Fatma A. Nabil, Emad |
author_facet | Fathy, Ghada M. Hassan, Hanan A. Sheta, Walaa Omara, Fatma A. Nabil, Emad |
author_sort | Fathy, Ghada M. |
collection | PubMed |
description | Occlusion awareness is one of the most challenging problems in several fields, such as multimedia, remote sensing, computer vision, and computer graphics. Realistic interactive applications struggle with occlusion and collision in dynamic environments, and dense 3D reconstruction is the most effective way to address this issue. However, existing methods perform poorly in practice because accurate depth, camera pose, and object motion are unavailable. This paper proposes a new framework that builds a full 3D model reconstruction and overcomes the occlusion problem in a complex dynamic scene without using sensor data. Widely available devices such as a monocular camera are used to generate a model suitable for video-streaming applications. The main objective is to create a smooth, accurate 3D point cloud of a dynamic environment from the cumulative information in a sequence of RGB video frames. The framework is composed of two main phases. The first uses an unsupervised learning technique to predict scene depth, camera pose, and object motion from monocular RGB videos. The second performs frame-wise point-cloud fusion to reconstruct a 3D model from the video frame sequence. Several evaluation metrics are measured: localization error, RMSE, and fitness between the ground truth (KITTI's sparse LiDAR points) and the predicted point cloud. Moreover, the framework is compared against widely used state-of-the-art methods on metrics such as MRE and Chamfer distance. Experimental results show that the proposed framework surpasses the other methods and is a strong candidate for 3D model reconstruction. (An illustrative sketch of the back-projection, fusion, and evaluation steps appears below this record.) |
format | Online Article Text |
id | pubmed-8157153 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8157153 2021-06-02 A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment Fathy, Ghada M. Hassan, Hanan A. Sheta, Walaa Omara, Fatma A. Nabil, Emad PeerJ Comput Sci Artificial Intelligence Occlusion awareness is one of the most challenging problems in several fields, such as multimedia, remote sensing, computer vision, and computer graphics. Realistic interactive applications struggle with occlusion and collision in dynamic environments, and dense 3D reconstruction is the most effective way to address this issue. However, existing methods perform poorly in practice because accurate depth, camera pose, and object motion are unavailable. This paper proposes a new framework that builds a full 3D model reconstruction and overcomes the occlusion problem in a complex dynamic scene without using sensor data. Widely available devices such as a monocular camera are used to generate a model suitable for video-streaming applications. The main objective is to create a smooth, accurate 3D point cloud of a dynamic environment from the cumulative information in a sequence of RGB video frames. The framework is composed of two main phases. The first uses an unsupervised learning technique to predict scene depth, camera pose, and object motion from monocular RGB videos. The second performs frame-wise point-cloud fusion to reconstruct a 3D model from the video frame sequence. Several evaluation metrics are measured: localization error, RMSE, and fitness between the ground truth (KITTI's sparse LiDAR points) and the predicted point cloud. Moreover, the framework is compared against widely used state-of-the-art methods on metrics such as MRE and Chamfer distance. Experimental results show that the proposed framework surpasses the other methods and is a strong candidate for 3D model reconstruction. PeerJ Inc. 2021-05-12 /pmc/articles/PMC8157153/ /pubmed/34084931 http://dx.doi.org/10.7717/peerj-cs.529 Text en ©2021 Fathy et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
spellingShingle | Artificial Intelligence Fathy, Ghada M. Hassan, Hanan A. Sheta, Walaa Omara, Fatma A. Nabil, Emad A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title | A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title_full | A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title_fullStr | A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title_full_unstemmed | A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title_short | A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment |
title_sort | novel no-sensors 3d model reconstruction from monocular video frames for a dynamic environment |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8157153/ https://www.ncbi.nlm.nih.gov/pubmed/34084931 http://dx.doi.org/10.7717/peerj-cs.529 |
work_keys_str_mv | AT fathyghadam anovelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT hassanhanana anovelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT shetawalaa anovelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT omarafatmaa anovelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT nabilemad anovelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT fathyghadam novelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT hassanhanana novelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT shetawalaa novelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT omarafatmaa novelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment AT nabilemad novelnosensors3dmodelreconstructionfrommonocularvideoframesforadynamicenvironment |
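The record's abstract describes a two-phase pipeline: unsupervised prediction of scene depth, camera pose, and object motion from monocular RGB frames, followed by frame-wise point-cloud fusion, with fitness, RMSE, and Chamfer distance measured against KITTI's sparse LiDAR ground truth. The sketch below is a minimal illustration of the geometric half of such a pipeline, not the authors' implementation: `depth` and `pose` are hypothetical placeholders for the network's per-frame predictions, `K` is the camera intrinsics matrix, and the fusion and evaluation steps use standard NumPy and Open3D calls.

```python
import numpy as np
import open3d as o3d  # point-cloud fusion and registration metrics


def backproject(depth, K, pose):
    """Back-project an H x W predicted depth map to world-frame 3D points.

    depth: (H, W) depth in meters (hypothetical network output).
    K:     (3, 3) camera intrinsics.
    pose:  (4, 4) camera-to-world transform (hypothetical network output).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T           # pixels -> normalized camera rays
    pts_cam = rays * depth.reshape(-1, 1)     # scale each ray by its depth
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_h @ pose.T)[:, :3]            # camera frame -> world frame


def fuse_frames(frames, K, voxel=0.05):
    """Frame-wise fusion: accumulate back-projected frames into one cloud,
    then voxel-downsample so overlapping observations merge into one model."""
    fused = o3d.geometry.PointCloud()
    for depth, pose in frames:                # one (depth, pose) pair per frame
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(backproject(depth, K, pose))
        fused += pcd
    return fused.voxel_down_sample(voxel)


def evaluate(pred, gt, thresh=0.2):
    """Fitness / inlier RMSE and symmetric Chamfer distance between the
    predicted cloud and a ground-truth cloud (e.g., KITTI sparse LiDAR)."""
    reg = o3d.pipelines.registration.evaluate_registration(
        pred, gt, max_correspondence_distance=thresh)
    d_pg = np.asarray(pred.compute_point_cloud_distance(gt))  # pred -> gt
    d_gp = np.asarray(gt.compute_point_cloud_distance(pred))  # gt -> pred
    chamfer = d_pg.mean() + d_gp.mean()
    return reg.fitness, reg.inlier_rmse, chamfer
```

Open3D's `evaluate_registration` reports the fitness (fraction of predicted points with a ground-truth neighbor within the correspondence threshold) and inlier RMSE, which correspond to two of the metrics named in the abstract, while the symmetric Chamfer distance sums the mean nearest-neighbor distances in both directions. The 0.2 m threshold and 5 cm voxel size are illustrative assumptions, not values taken from the paper.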