A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet
Main Authors: | Li, Bin; Zhu, Shiao; Lu, Yi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9657107/ https://www.ncbi.nlm.nih.gov/pubmed/36365932 http://dx.doi.org/10.3390/s22218235 |
_version_ | 1784829608327118848 |
---|---|
author | Li, Bin Zhu, Shiao Lu, Yi |
author_facet | Li, Bin Zhu, Shiao Lu, Yi |
author_sort | Li, Bin |
collection | PubMed |
description | It is a challenging problem to infer objects with reasonable shape and appearance from a single picture. Existing research often focuses on the structure of the point cloud generation network while neglecting 2D image feature extraction and the reduction of feature loss during propagation through the network. In this paper, a single-stage and single-view 3D point cloud reconstruction network, 3D-SSRecNet, is proposed. The proposed 3D-SSRecNet is a simple single-stage network composed of a 2D image feature extraction network and a point cloud prediction network. The single-stage network structure reduces the loss of the extracted 2D image features. The 2D image feature extraction network takes DetNet as the backbone, which can extract more details from 2D images. In order to generate point clouds with better shape and appearance, the point cloud prediction network uses the exponential linear unit (ELU) as its activation function, and the joint function of chamfer distance (CD) and Earth mover’s distance (EMD) is used as the loss function of 3D-SSRecNet (a minimal sketch of this joint loss is given after the record fields below). To verify the effectiveness of 3D-SSRecNet, we conducted a series of experiments on the ShapeNet and Pix3D datasets. The experimental results, measured by CD and EMD, show that 3D-SSRecNet outperforms state-of-the-art reconstruction methods. |
format | Online Article Text |
id | pubmed-9657107 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9657107 2022-11-15 A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet Li, Bin Zhu, Shiao Lu, Yi Sensors (Basel) Article (abstract as in the description field above) MDPI 2022-10-27 /pmc/articles/PMC9657107/ /pubmed/36365932 http://dx.doi.org/10.3390/s22218235 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Li, Bin Zhu, Shiao Lu, Yi A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title | A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title_full | A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title_fullStr | A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title_full_unstemmed | A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title_short | A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet |
title_sort | single stage and single view 3d point cloud reconstruction network based on detnet |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9657107/ https://www.ncbi.nlm.nih.gov/pubmed/36365932 http://dx.doi.org/10.3390/s22218235 |
work_keys_str_mv | AT libin asinglestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet AT zhushiao asinglestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet AT luyi asinglestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet AT libin singlestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet AT zhushiao singlestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet AT luyi singlestageandsingleview3dpointcloudreconstructionnetworkbasedondetnet |
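The abstract above names the loss concretely enough to illustrate. The following is a minimal, hypothetical PyTorch sketch of a joint chamfer distance (CD) + Earth mover’s distance (EMD) loss of the kind the description refers to; the batch shapes, the equal weighting `w_cd`/`w_emd`, and the exact-assignment EMD (practical only for small point clouds) are assumptions made for illustration, not the paper’s actual implementation.

```python
# Hypothetical sketch of a joint CD + EMD point cloud loss (not the paper's code).
import torch
from scipy.optimize import linear_sum_assignment


def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric chamfer distance between clouds of shape (B, N, 3) and (B, M, 3)."""
    dist = torch.cdist(pred, gt)                          # (B, N, M) pairwise distances
    pred_to_gt = dist.min(dim=2).values.pow(2).mean(dim=1)  # nearest-neighbor term, pred -> gt
    gt_to_pred = dist.min(dim=1).values.pow(2).mean(dim=1)  # nearest-neighbor term, gt -> pred
    return (pred_to_gt + gt_to_pred).mean()


def earth_movers_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Exact EMD via optimal assignment; O(N^3), so only illustrative for small clouds."""
    assert pred.shape == gt.shape, "exact EMD assumes equal-sized point clouds"
    losses = []
    for p, g in zip(pred, gt):                            # iterate over the batch
        cost = torch.cdist(p, g)                          # (N, N) cost matrix
        row, col = linear_sum_assignment(cost.detach().cpu().numpy())
        matched = cost[torch.as_tensor(row), torch.as_tensor(col)]
        losses.append(matched.mean())                     # mean distance over the matching
    return torch.stack(losses).mean()


def joint_loss(pred: torch.Tensor, gt: torch.Tensor,
               w_cd: float = 1.0, w_emd: float = 1.0) -> torch.Tensor:
    """Weighted combination of CD and EMD, as the abstract describes in general terms."""
    return w_cd * chamfer_distance(pred, gt) + w_emd * earth_movers_distance(pred, gt)
```

Calling `joint_loss(pred, gt)` on two `(B, N, 3)` tensors returns a scalar suitable for backpropagation; published implementations typically replace the exact-assignment EMD with an approximate, GPU-friendly solver.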