
Point cloud completion in challenging indoor scenarios with human motion


Bibliographic Details
Main Authors: Zhang, Chengsi, Czarnuch, Stephen
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10209708/
https://www.ncbi.nlm.nih.gov/pubmed/37251352
http://dx.doi.org/10.3389/frobt.2023.1184614
_version_ 1785046934397911040
author Zhang, Chengsi
Czarnuch, Stephen
author_facet Zhang, Chengsi
Czarnuch, Stephen
author_sort Zhang, Chengsi
collection PubMed
description Combining and completing point cloud data from two or more sensors with arbitrary relative perspectives in a dynamic, cluttered, and complex environment is challenging, especially when the two sensors have significant perspective differences and a large overlap ratio and a feature-rich scene cannot be guaranteed. We propose a novel approach for this challenging scenario that registers time-series captures from two cameras with unknown perspectives by exploiting human movements, making the system easy to use in real-life scenes. In our approach, we first reduce the six unknowns of three-dimensional (3D) point cloud completion to three by aligning the ground planes found by our previous perspective-independent 3D ground plane estimation algorithm. Subsequently, we use a histogram-based approach to identify and extract all the humans in each frame, generating a 3D human walking sequence over the time series. To enhance accuracy and performance, we convert each 3D human walking sequence to a line by calculating the center of mass (CoM) point of each human body and connecting these points. Finally, we match the walking paths across data trials by minimizing the Fréchet distance between the two paths and applying 2D iterative closest point (ICP) to find the remaining three unknowns of the overall transformation matrix for the final alignment. Using this approach, we successfully register the corresponding human walking paths between the two cameras’ captures and estimate the transformation matrix between the two sensors.
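The path-matching step in the abstract minimizes the Fréchet distance between two CoM walking paths. As a minimal sketch of that idea, the standard discrete Fréchet distance can be computed with dynamic programming over two 2D point sequences; the function name, array layout, and use of NumPy here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two 2-D polylines.

    p, q: (n, 2) and (m, 2) float arrays of ordered path points,
    e.g. ground-plane projections of per-frame CoM positions.
    """
    n, m = len(p), len(q)
    # Pairwise Euclidean distances between every point of p and q.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    # First column/row: forced couplings along one path's start point.
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    # Interior: best predecessor coupling, then take the worse of
    # that and the current pair distance (min-max recurrence).
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```

In a pipeline like the one described, this score would be minimized over candidate pairings of walking paths from the two cameras before 2D ICP refines the remaining in-plane translation and rotation.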
format Online
Article
Text
id pubmed-10209708
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-102097082023-05-26 Point cloud completion in challenging indoor scenarios with human motion Zhang, Chengsi Czarnuch, Stephen Front Robot AI Robotics and AI Frontiers Media S.A. 2023-05-10 /pmc/articles/PMC10209708/ /pubmed/37251352 http://dx.doi.org/10.3389/frobt.2023.1184614 Text en Copyright © 2023 Zhang and Czarnuch.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Zhang, Chengsi
Czarnuch, Stephen
Point cloud completion in challenging indoor scenarios with human motion
title Point cloud completion in challenging indoor scenarios with human motion
title_full Point cloud completion in challenging indoor scenarios with human motion
title_fullStr Point cloud completion in challenging indoor scenarios with human motion
title_full_unstemmed Point cloud completion in challenging indoor scenarios with human motion
title_short Point cloud completion in challenging indoor scenarios with human motion
title_sort point cloud completion in challenging indoor scenarios with human motion
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10209708/
https://www.ncbi.nlm.nih.gov/pubmed/37251352
http://dx.doi.org/10.3389/frobt.2023.1184614
work_keys_str_mv AT zhangchengsi pointcloudcompletioninchallengingindoorscenarioswithhumanmotion
AT czarnuchstephen pointcloudcompletioninchallengingindoorscenarioswithhumanmotion