
Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss


Bibliographic Details
Main Authors: Ishida, Yuki; Manabe, Yoshitsugu; Yata, Noriko
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9147062/
https://www.ncbi.nlm.nih.gov/pubmed/35621889
http://dx.doi.org/10.3390/jimaging8050125
_version_ 1784716715964235776
author Ishida, Yuki
Manabe, Yoshitsugu
Yata, Noriko
author_facet Ishida, Yuki
Manabe, Yoshitsugu
Yata, Noriko
author_sort Ishida, Yuki
collection PubMed
description Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and depth measurement failure for dark hair colors such as black hair. Recently, point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has been studied, but existing methods learn only the shape and do not complete colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and the International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds, based on the Chamfer Distance (CD) or Earth Mover’s Distance (EMD) used for point cloud shape evaluation, as a color loss. In addition, an adversarial loss on L*a*b* images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained using a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves evaluation in the image domain.
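The description above mentions a color loss that applies the point-matching idea behind the Chamfer Distance (CD) to L*a*b* color differences. The following NumPy sketch is one plausible reading of that idea, not the authors' implementation: each point carries XYZ plus L*a*b* channels, nearest neighbors are matched on XYZ, and the shape and color terms are averaged over the matched pairs. The function name, the (N, 6) array layout, and the squared-distance convention are assumptions made for illustration.

```python
import numpy as np

def chamfer_color_loss(pred, gt):
    """Chamfer-style shape and color terms for colored point clouds.

    pred, gt : (N, 6) and (M, 6) arrays; columns 0:3 are XYZ,
    columns 3:6 are CIE L*a*b*. This layout is an assumption of the sketch.
    """
    # Pairwise squared distances computed on XYZ only (N x M matrix).
    d_xyz = np.sum((pred[:, None, :3] - gt[None, :, :3]) ** 2, axis=-1)

    nn_pg = d_xyz.argmin(axis=1)  # nearest GT point for each predicted point
    nn_gp = d_xyz.argmin(axis=0)  # nearest predicted point for each GT point

    # Shape term: symmetric (squared) Chamfer Distance.
    shape = (d_xyz[np.arange(len(pred)), nn_pg].mean()
             + d_xyz[nn_gp, np.arange(len(gt))].mean())

    # Color term: squared L*a*b* difference over the same matched pairs.
    color = (np.sum((pred[:, 3:] - gt[nn_pg, 3:]) ** 2, axis=-1).mean()
             + np.sum((gt[:, 3:] - pred[nn_gp, 3:]) ** 2, axis=-1).mean())

    return shape, color
```

In training, these two terms would typically be combined as a weighted sum together with the adversarial loss on the rendered L*a*b* images; the weighting is not specified in this record.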
format Online
Article
Text
id pubmed-9147062
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9147062 2022-05-29 Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss Ishida, Yuki Manabe, Yoshitsugu Yata, Noriko J Imaging Article Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and depth measurement failure for dark hair colors such as black hair. Recently, point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has been studied, but existing methods learn only the shape and do not complete colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and the International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds, based on the Chamfer Distance (CD) or Earth Mover’s Distance (EMD) used for point cloud shape evaluation, as a color loss. In addition, an adversarial loss on L*a*b* images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained using a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves evaluation in the image domain. MDPI 2022-04-26 /pmc/articles/PMC9147062/ /pubmed/35621889 http://dx.doi.org/10.3390/jimaging8050125 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ishida, Yuki
Manabe, Yoshitsugu
Yata, Noriko
Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title_full Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title_fullStr Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title_full_unstemmed Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title_short Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
title_sort colored point cloud completion for a head using adversarial rendered image loss
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9147062/
https://www.ncbi.nlm.nih.gov/pubmed/35621889
http://dx.doi.org/10.3390/jimaging8050125
work_keys_str_mv AT ishidayuki coloredpointcloudcompletionforaheadusingadversarialrenderedimageloss
AT manabeyoshitsugu coloredpointcloudcompletionforaheadusingadversarialrenderedimageloss
AT yatanoriko coloredpointcloudcompletionforaheadusingadversarialrenderedimageloss