
What makes the unsupervised monocular depth estimation (UMDE) model training better


Bibliographic Details
Main Authors: Wang, Xiangtong, Liang, Binbin, Yang, Menglong, Li, Wei
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9768171/
https://www.ncbi.nlm.nih.gov/pubmed/36539595
http://dx.doi.org/10.1038/s41598-022-26613-0
author Wang, Xiangtong
Liang, Binbin
Yang, Menglong
Li, Wei
author_facet Wang, Xiangtong
Liang, Binbin
Yang, Menglong
Li, Wei
author_sort Wang, Xiangtong
collection PubMed
description Current computer vision tasks based on deep learning require large amounts of annotated data for model training and testing, especially dense estimation tasks such as optical flow estimation and depth estimation. In practice, manual labeling for dense estimation tasks is very difficult or even impossible, and the scenes in such datasets are often restricted to a narrow range, which severely limits progress in the community. To overcome this deficiency, we propose a synthetic dataset generation method that yields an expandable dataset without a burdensome manual labeling effort. With this method, we construct a dataset called MineNavi, containing first-person-view video footage from an aircraft matched with accurate ground-truth depth for aircraft navigation applications. We also provide quantitative experiments showing that pre-training on MineNavi improves the performance of a depth estimation model and speeds up its convergence on real-scene data. Since the synthetic dataset has an effect similar to a real-world dataset during deep model training, we finally conduct experiments on MineNavi with unsupervised monocular depth estimation (UMDE) deep learning models to demonstrate the impact of various factors in our dataset, such as lighting conditions and motion modes, aiming to explore what makes the training of such models better.
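The UMDE models the abstract refers to are typically trained by view synthesis: a network predicts depth for a target frame, a neighboring source frame is warped into the target view through that depth and the relative camera pose, and the photometric reconstruction error serves as the training signal, with no ground-truth labels. As a rough, minimal sketch of that loss (nearest-neighbor sampling in NumPy; not the authors' implementation, and names such as `photometric_loss` are illustrative):

```python
import numpy as np

def backproject(depth, K):
    """Lift every pixel to a 3D point in the camera frame using its depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))           # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    rays = np.linalg.inv(K) @ pix                             # unit-depth rays
    return rays * depth.reshape(1, -1)                        # 3 x N points

def project(points, K, R, t):
    """Rigidly transform 3D points into another view and project with K."""
    p = K @ (R @ points + t[:, None])
    return p[:2] / p[2:3]                                     # 2 x N pixel coords

def photometric_loss(target, source, depth, K, R, t):
    """Warp `source` into the target view via predicted depth and pose (R, t),
    then compare it to `target` photometrically (mean absolute error)."""
    h, w = target.shape
    uv = project(backproject(depth, K), K, R, t)
    u = np.clip(np.round(uv[0]).astype(int), 0, w - 1)        # nearest-neighbor
    v = np.clip(np.round(uv[1]).astype(int), 0, h - 1)        # sampling indices
    warped = source[v, u].reshape(h, w)
    return np.abs(warped - target).mean()
```

With the correct depth and pose the warped source matches the target and the loss vanishes; in a real pipeline the sampling would be differentiable (e.g. bilinear) so its gradient can train the depth and pose networks, which is why dataset factors such as lighting and camera motion directly affect how well UMDE training works.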
format Online
Article
Text
id pubmed-9768171
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-97681712022-12-22 What makes the unsupervised monocular depth estimation (UMDE) model training better Wang, Xiangtong Liang, Binbin Yang, Menglong Li, Wei Sci Rep Article
Nature Publishing Group UK 2022-12-20 /pmc/articles/PMC9768171/ /pubmed/36539595 http://dx.doi.org/10.1038/s41598-022-26613-0 Text en © The Author(s) 2022 Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Wang, Xiangtong
Liang, Binbin
Yang, Menglong
Li, Wei
What makes the unsupervised monocular depth estimation (UMDE) model training better
title What makes the unsupervised monocular depth estimation (UMDE) model training better
title_full What makes the unsupervised monocular depth estimation (UMDE) model training better
title_fullStr What makes the unsupervised monocular depth estimation (UMDE) model training better
title_full_unstemmed What makes the unsupervised monocular depth estimation (UMDE) model training better
title_short What makes the unsupervised monocular depth estimation (UMDE) model training better
title_sort what makes the unsupervised monocular depth estimation (umde) model training better
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9768171/
https://www.ncbi.nlm.nih.gov/pubmed/36539595
http://dx.doi.org/10.1038/s41598-022-26613-0
work_keys_str_mv AT wangxiangtong whatmakestheunsupervisedmonoculardepthestimationumdemodeltrainingbetter
AT liangbinbin whatmakestheunsupervisedmonoculardepthestimationumdemodeltrainingbetter
AT yangmenglong whatmakestheunsupervisedmonoculardepthestimationumdemodeltrainingbetter
AT liwei whatmakestheunsupervisedmonoculardepthestimationumdemodeltrainingbetter