Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss
Main Authors: Zheng, Yijie; Luo, Jianxin; Chen, Weiwei; Zhang, Yanyan; Sun, Haixun; Pan, Zhisong
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9824241/ https://www.ncbi.nlm.nih.gov/pubmed/36616737 http://dx.doi.org/10.3390/s23010136
_version_ | 1784866361421332480 |
author | Zheng, Yijie Luo, Jianxin Chen, Weiwei Zhang, Yanyan Sun, Haixun Pan, Zhisong |
author_facet | Zheng, Yijie Luo, Jianxin Chen, Weiwei Zhang, Yanyan Sun, Haixun Pan, Zhisong |
author_sort | Zheng, Yijie |
collection | PubMed |
description | Multi-view 3D reconstruction based on deep learning is developing rapidly, and unsupervised learning has become a research hotspot because it requires no ground-truth labels. Current unsupervised methods mainly use 3D CNNs to regularize the cost volume for depth regression, which results in high memory requirements and long computing times. In this paper, we propose Unsup_patchmatchnet, an end-to-end unsupervised multi-view 3D reconstruction network framework based on PatchMatch that dramatically reduces memory requirements and computing time. We propose a feature point consistency loss function and incorporate various self-supervised signals, such as photometric consistency loss and semantic consistency loss, into the loss function. We also propose a high-resolution loss method, which improves the reconstruction of high-resolution images. Experiments show that, compared with networks using the 3D CNN method, memory usage is reduced by 80% and running time by more than 50%. The overall error of the reconstructed 3D point cloud is only 0.501 mm, which is superior to most current unsupervised multi-view 3D reconstruction networks. We also test on different datasets and verify that the network generalizes well. |
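The abstract lists photometric consistency as one of the network's self-supervised signals. A minimal sketch of such a loss, assuming a source image has already been warped into the reference view using the predicted depth (the function name, the masked-L1 form, and the array shapes are illustrative assumptions; the paper's exact formulation is not given in this record):

```python
import numpy as np

def photometric_consistency_loss(ref, warped, mask):
    """Masked L1 photometric difference between the reference image
    and a source image warped into the reference view.

    ref, warped : (H, W, 3) float arrays of pixel intensities.
    mask        : (H, W) array, 1 where the warp landed inside the
                  source image, 0 for invalid (occluded/out-of-view)
                  pixels that should not contribute to the loss.
    """
    diff = np.abs(ref - warped) * mask[..., None]  # zero out invalid pixels
    # Normalize by the number of valid pixel-channels; epsilon guards
    # against an all-invalid mask.
    return diff.sum() / (mask.sum() * ref.shape[-1] + 1e-8)
```

In an unsupervised setting this signal replaces a ground-truth depth label: if the predicted depth is correct, the warped source view should match the reference view wherever the mask is valid.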
format | Online Article Text |
id | pubmed-9824241 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-98242412023-01-08 Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss Zheng, Yijie Luo, Jianxin Chen, Weiwei Zhang, Yanyan Sun, Haixun Pan, Zhisong Sensors (Basel) Article MDPI 2022-12-23 /pmc/articles/PMC9824241/ /pubmed/36616737 http://dx.doi.org/10.3390/s23010136 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zheng, Yijie Luo, Jianxin Chen, Weiwei Zhang, Yanyan Sun, Haixun Pan, Zhisong Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title | Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title_full | Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title_fullStr | Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title_full_unstemmed | Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title_short | Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss |
title_sort | unsupervised 3d reconstruction with multi-measure and high-resolution loss |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9824241/ https://www.ncbi.nlm.nih.gov/pubmed/36616737 http://dx.doi.org/10.3390/s23010136 |
work_keys_str_mv | AT zhengyijie unsupervised3dreconstructionwithmultimeasureandhighresolutionloss AT luojianxin unsupervised3dreconstructionwithmultimeasureandhighresolutionloss AT chenweiwei unsupervised3dreconstructionwithmultimeasureandhighresolutionloss AT zhangyanyan unsupervised3dreconstructionwithmultimeasureandhighresolutionloss AT sunhaixun unsupervised3dreconstructionwithmultimeasureandhighresolutionloss AT panzhisong unsupervised3dreconstructionwithmultimeasureandhighresolutionloss |