
Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes


Bibliographic Details
Main Authors: Bao, Yongtang; Lin, Pengfei; Li, Yao; Qi, Yue; Wang, Zhihui; Du, Wenxiang; Fan, Qing
Format: Online Article (Text)
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8201245/
https://www.ncbi.nlm.nih.gov/pubmed/34200488
http://dx.doi.org/10.3390/s21113939
Collection: PubMed
Description: Scene reconstruction takes images or videos as input to build a 3D model of a real scene and has important applications in smart cities, surveying and mapping, the military, and other fields. Structure from motion (SFM), a key step in scene reconstruction, recovers sparse point clouds from image sequences. However, large-scale scenes cannot be reconstructed on a single compute node, and image matching and geometric filtering dominate the runtime of traditional SFM. In this paper, we propose a novel divide-and-conquer framework for the distributed SFM problem. First, we use the global navigation satellite system (GNSS) information attached to the images to compute a GNSS neighborhood; matching each image only against its valid GNSS neighbors greatly reduces the number of image pairs to match and yields a robust matching relationship. Second, the computed matching relationship serves as the initial camera graph, which a clustering algorithm divides into multiple subgraphs; local SFM runs on several computing nodes to register the local cameras. Finally, all local camera poses are merged and optimized to complete global camera registration. Experiments show that our system solves the structure-from-motion problem in large-scale scenes accurately and efficiently.
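The GNSS-neighborhood step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the haversine distance, the `radius_m` threshold, and the `(lat, lon)` input format are all assumptions made for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GNSS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gnss_neighbors(images, radius_m=100.0):
    """Return candidate pairs (i, j) of images whose GNSS fixes lie within radius_m.

    `images` is a list of (lat, lon) tuples. Only cheap distance checks are done
    here, so the expensive feature matching and geometric filtering that follow
    run on far fewer image pairs than exhaustive all-pairs matching would need.
    """
    pairs = []
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            if haversine_m(*images[i], *images[j]) <= radius_m:
                pairs.append((i, j))
    return pairs
```

For example, two images taken roughly 11 m apart survive a 50 m radius while a distant third image is pruned, which is the pair-reduction effect the abstract attributes to GNSS filtering.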
ID: pubmed-8201245
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2021-06-07
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).