Dual Guided Aggregation Network for Stereo Image Matching
Main Authors: | Wang, Ruei-Ping, Lin, Chao-Hung |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9414513/ https://www.ncbi.nlm.nih.gov/pubmed/36015872 http://dx.doi.org/10.3390/s22166111 |
_version_ | 1784776006315278336 |
---|---|
author | Wang, Ruei-Ping Lin, Chao-Hung |
author_facet | Wang, Ruei-Ping Lin, Chao-Hung |
author_sort | Wang, Ruei-Ping |
collection | PubMed |
description | Stereo image dense matching, which plays a key role in 3D reconstruction, remains a challenging task in photogrammetry and computer vision. In addition to block-based matching, recent studies based on artificial neural networks have achieved great progress in stereo matching by using deep convolutional networks. This study proposes a novel network called a dual guided aggregation network (Dual-GANet), which utilizes both left-to-right and right-to-left image matching in network design and training to reduce the possibility of pixel mismatch. Flipped training with cost volume consistentization is introduced to realize the learning of invisible-to-visible pixel matching and left–right consistency matching. In addition, suppressed multi-regression is proposed, which suppresses unrelated information before regression and selects multiple peaks from a disparity probability distribution. The proposed dual network with the left–right consistent matching scheme can be applied to most stereo matching models. To evaluate performance, GANet, which is designed based on semi-global matching, was selected as the backbone, with extensions and modifications to the guided aggregation, disparity regression, and loss function. Experimental results on the SceneFlow and KITTI2015 datasets demonstrate the superiority of the Dual-GANet over related models in terms of average end-point error (EPE) and pixel error rate (ER). The Dual-GANet, with an average EPE of 0.418 and ER (>1 pixel) of 5.81% on SceneFlow and an average EPE of 0.589 and ER (>3 pixels) of 1.76% on KITTI2015, outperforms the backbone model, which attains an average EPE of 0.440 and ER (>1 pixel) of 6.56% on SceneFlow and an average EPE of 0.790 and ER (>3 pixels) of 2.32% on KITTI2015. (Illustrative sketches of the left–right consistency idea, disparity regression, and these metrics are given after the record below.) |
format | Online Article Text |
id | pubmed-9414513 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-94145132022-08-27 Dual Guided Aggregation Network for Stereo Image Matching Wang, Ruei-Ping Lin, Chao-Hung Sensors (Basel) Article Stereo image dense matching, which plays a key role in 3D reconstruction, remains a challenging task in photogrammetry and computer vision. In addition to block-based matching, recent studies based on artificial neural networks have achieved great progress in stereo matching by using deep convolutional networks. This study proposes a novel network called a dual guided aggregation network (Dual-GANet), which utilizes both left-to-right and right-to-left image matching in network design and training to reduce the possibility of pixel mismatch. Flipped training with cost volume consistentization is introduced to realize the learning of invisible-to-visible pixel matching and left–right consistency matching. In addition, suppressed multi-regression is proposed, which suppresses unrelated information before regression and selects multiple peaks from a disparity probability distribution. The proposed dual network with the left–right consistent matching scheme can be applied to most stereo matching models. To evaluate performance, GANet, which is designed based on semi-global matching, was selected as the backbone, with extensions and modifications to the guided aggregation, disparity regression, and loss function. Experimental results on the SceneFlow and KITTI2015 datasets demonstrate the superiority of the Dual-GANet over related models in terms of average end-point error (EPE) and pixel error rate (ER). The Dual-GANet, with an average EPE of 0.418 and ER (>1 pixel) of 5.81% on SceneFlow and an average EPE of 0.589 and ER (>3 pixels) of 1.76% on KITTI2015, outperforms the backbone model, which attains an average EPE of 0.440 and ER (>1 pixel) of 6.56% on SceneFlow and an average EPE of 0.790 and ER (>3 pixels) of 2.32% on KITTI2015. MDPI 2022-08-16 /pmc/articles/PMC9414513/ /pubmed/36015872 http://dx.doi.org/10.3390/s22166111 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Ruei-Ping Lin, Chao-Hung Dual Guided Aggregation Network for Stereo Image Matching |
title | Dual Guided Aggregation Network for Stereo Image Matching |
title_full | Dual Guided Aggregation Network for Stereo Image Matching |
title_fullStr | Dual Guided Aggregation Network for Stereo Image Matching |
title_full_unstemmed | Dual Guided Aggregation Network for Stereo Image Matching |
title_short | Dual Guided Aggregation Network for Stereo Image Matching |
title_sort | dual guided aggregation network for stereo image matching |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9414513/ https://www.ncbi.nlm.nih.gov/pubmed/36015872 http://dx.doi.org/10.3390/s22166111 |
work_keys_str_mv | AT wangrueiping dualguidedaggregationnetworkforstereoimagematching AT linchaohung dualguidedaggregationnetworkforstereoimagematching |
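The record's abstract centers on using both left-to-right and right-to-left matching to reduce pixel mismatch. The paper's network-level formulation is not reproduced in this record; the sketch below only illustrates the classical post-hoc left–right consistency check that this idea generalizes. The function name, array layout, and 1-pixel tolerance are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a classical left-right consistency check on two disparity
# maps (not the paper's network-level dual matching scheme).
import numpy as np

def left_right_consistency_mask(disp_left, disp_right, max_diff=1.0):
    """Return a boolean mask of left-image pixels whose left-to-right and
    right-to-left disparities agree within `max_diff` pixels."""
    h, w = disp_left.shape
    cols = np.tile(np.arange(w), (h, 1))                    # column index of each pixel
    # A left pixel at column x corresponds to the right-image pixel at x - d_left(x).
    x_right = np.clip(np.round(cols - disp_left).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]
    disp_right_at_match = disp_right[rows, x_right]          # right disparity at the matched pixel
    # Consistent pixels have (nearly) equal disparity magnitudes in both directions.
    return np.abs(disp_left - disp_right_at_match) <= max_diff
```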
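The abstract also refers to disparity regression over a disparity probability distribution; the suppressed multi-regression proposed in the paper modifies this step, and its exact formulation is not given in the record. Below is only the standard soft-argmax regression commonly used by cost-volume networks in the GANet family, as a baseline sketch; the tensor layout is an assumption.

```python
# Standard soft-argmax disparity regression over a cost volume. The paper's
# suppressed multi-regression replaces this step; only the common baseline
# operation is shown here.
import torch
import torch.nn.functional as F

def soft_argmax_disparity(cost_volume):
    """cost_volume: (B, D, H, W) matching costs for D disparity hypotheses.
    Returns a (B, H, W) disparity map as the expectation over the
    softmax-normalized (lower cost = higher probability) distribution."""
    prob = F.softmax(-cost_volume, dim=1)                    # costs -> per-pixel probabilities
    disp_values = torch.arange(
        cost_volume.size(1), dtype=prob.dtype, device=prob.device
    ).view(1, -1, 1, 1)                                      # disparity hypotheses 0..D-1
    return (prob * disp_values).sum(dim=1)                   # expected disparity
```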
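Finally, the reported numbers use average end-point error (EPE) and pixel error rate (ER) with a 1-pixel threshold for SceneFlow and a 3-pixel threshold for KITTI 2015. A minimal sketch of these two metrics follows; the valid-pixel handling is an assumption, since benchmarks differ in how unlabeled pixels are masked.

```python
# Sketch of the two reported metrics: average end-point error (EPE) and
# pixel error rate (ER) above a threshold (1 px for SceneFlow, 3 px for
# KITTI 2015 in the abstract). Valid-pixel masking is an assumption.
import numpy as np

def disparity_metrics(pred_disp, gt_disp, threshold=1.0, valid_mask=None):
    """Return (mean absolute disparity error, fraction of pixels whose
    error exceeds `threshold`)."""
    if valid_mask is None:
        valid_mask = np.isfinite(gt_disp)        # assume all finite ground-truth pixels count
    err = np.abs(pred_disp[valid_mask] - gt_disp[valid_mask])
    return float(err.mean()), float((err > threshold).mean())
```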