
Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches

Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified by domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT used to train deep object detectors. In particular, we assess the goodness of multi-modal co-training, which relies on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal co-training. Our results suggest that in a standard SSL setting (no domain shift, few human-labeled data) and under a virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal co-training. In the latter case, after performing GAN-based domain translation, both co-training modalities perform on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
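For readers unfamiliar with the method, the sketch below illustrates the core co-training loop on a toy two-view classification problem: two models, each trained on a different view of the data, repeatedly promote their most confident predictions on unlabeled samples into a shared self-labeled pool. This is only an illustrative sketch with made-up data and scikit-learn classifiers; the paper's actual pipeline uses deep object detectors, RGB and estimated-depth views, and self-labeled bounding boxes rather than class labels.

# Minimal co-training sketch (illustration only, not the authors' pipeline).
# Two classifiers, each seeing a different "view" of the data, iteratively
# promote their most confident predictions on unlabeled samples into a
# shared self-labeled training pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 4 features split into two 2-feature views; labels depend on both views.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
view_a, view_b = X[:, :2], X[:, 2:]

labeled = np.arange(20)            # small human-labeled pool
unlabeled = np.arange(20, 500)     # pool to be self-labeled
pseudo_y = np.full(len(y), -1)     # -1 = not labeled yet
pseudo_y[labeled] = y[labeled]

model_a, model_b = LogisticRegression(), LogisticRegression()

for cycle in range(5):
    # 1. Retrain each model on the current (partly self-labeled) pool, one view each.
    model_a.fit(view_a[labeled], pseudo_y[labeled])
    model_b.fit(view_b[labeled], pseudo_y[labeled])
    if unlabeled.size == 0:
        break
    # 2. Each model scores the unlabeled pool and keeps its most confident samples.
    newly_added = []
    for model, view in ((model_a, view_a), (model_b, view_b)):
        proba = model.predict_proba(view[unlabeled])
        top = unlabeled[np.argsort(proba.max(axis=1))[-10:]]   # 10 most confident
        pseudo_y[top] = model.predict(view[top])                # self-labels
        newly_added.append(top)
    # 3. Confident self-labels from either model enter the shared pool,
    #    so each model also learns from its peer's predictions.
    added = np.unique(np.concatenate(newly_added))
    labeled = np.union1d(labeled, added)
    unlabeled = np.setdiff1d(unlabeled, added)

print("labeled pool grew from 20 to", labeled.size, "samples")

The shared-pool variant shown here is a common simplification; in classic co-training each model labels samples specifically to grow its peer's training set, which is how two complementary views (such as appearance and depth) are meant to help each other.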

Bibliographic Details
Main Authors: Gómez, Jose L., Villalonga, Gabriel, López, Antonio M.
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8125436/
https://www.ncbi.nlm.nih.gov/pubmed/34064323
http://dx.doi.org/10.3390/s21093185
Collection: PubMed
Record ID: pubmed-8125436
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2021-05-04
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).