Revisiting Consistency for Semi-Supervised Semantic Segmentation †

Semi-supervised learning is an attractive technique in practical deployments of deep models since it relaxes the dependence on labeled data. It is especially important in the scope of dense prediction because pixel-level annotation requires substantial effort. This paper considers semi-supervised algorithms that enforce consistent predictions over perturbed unlabeled inputs. We study the advantages of perturbing only one of the two model instances and preventing the backward pass through the unperturbed instance. We also propose a competitive perturbation model as a composition of geometric warp and photometric jittering. We experiment with efficient models due to their importance for real-time and low-power applications. Our experiments show clear advantages of (1) one-way consistency, (2) perturbing only the student branch, and (3) strong photometric and geometric perturbations. Our perturbation model outperforms recent work and most of the contribution comes from the photometric component. Experiments with additional data from the large coarsely annotated subset of Cityscapes suggest that semi-supervised training can outperform supervised training with coarse labels. Our source code is available at https://github.com/Ivan1248/semisup-seg-efficient.
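The abstract describes the training scheme only at a high level. The following is a minimal PyTorch-style sketch of how one-way consistency with a perturbed student branch might look; the cross-entropy consistency loss, the random affine warp standing in for the paper's geometric warp, and the additive-noise photometric jitter are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def one_way_consistency_loss(model, x_unlabeled, photometric_jitter, grid):
        # Teacher branch: clean input, gradients blocked (one-way consistency).
        with torch.no_grad():
            teacher_probs = F.softmax(model(x_unlabeled), dim=1)  # N x C x H x W
            # Warp the teacher prediction with the same geometric transform that
            # the student input will receive, so the two outputs align spatially.
            teacher_probs = F.grid_sample(teacher_probs, grid, align_corners=False)

        # Student branch: photometric jitter composed with the geometric warp.
        x_student = F.grid_sample(photometric_jitter(x_unlabeled), grid, align_corners=False)
        student_log_probs = F.log_softmax(model(x_student), dim=1)

        # Cross-entropy between fixed teacher probabilities and the student prediction.
        return -(teacher_probs * student_log_probs).sum(dim=1).mean()

    if __name__ == "__main__":
        # Smoke test with stand-ins: a one-layer "segmenter", additive-noise jitter,
        # and a small random affine warp in place of a structured geometric warp.
        model = torch.nn.Conv2d(3, 19, kernel_size=3, padding=1)
        jitter = lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0.0, 1.0)
        x = torch.rand(2, 3, 64, 64)
        theta = torch.eye(2, 3).repeat(2, 1, 1) + 0.05 * torch.randn(2, 2, 3)
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        loss = one_way_consistency_loss(model, x, jitter, grid)
        loss.backward()
        print(loss.item())

Blocking gradients through the clean branch (torch.no_grad) is what makes the consistency one-way: only the student's perturbed forward pass contributes to the parameter update.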

Bibliographic Details
Main Authors: Grubišić, Ivan, Oršić, Marin, Šegvić, Siniša
Format: Online Article Text
Language: English
Published: Sensors (Basel), MDPI, 13 January 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9865240/
https://www.ncbi.nlm.nih.gov/pubmed/36679735
http://dx.doi.org/10.3390/s23020940
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).