Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance

Bibliographic Details

Main Authors: Smith, Leslie N., Conovaloff, Adam
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9200967/
https://www.ncbi.nlm.nih.gov/pubmed/35719691
http://dx.doi.org/10.3389/frai.2022.880729
_version_ 1784728182106095616
author Smith, Leslie N.
Conovaloff, Adam
author_facet Smith, Leslie N.
Conovaloff, Adam
author_sort Smith, Leslie N.
collection PubMed
description Reaching the performance of fully supervised learning with unlabeled data and labeling only one sample per class might be ideal for deep learning applications. We demonstrate for the first time the potential for building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN up to test accuracies that are comparable to fully supervised learning. Our method combines class prototype refining, class balancing, and self-training. A good prototype choice is essential, and we propose a technique for obtaining iconic examples. In addition, we demonstrate that class balancing methods substantially improve accuracy in semi-supervised learning, to levels that allow self-training to reach fully supervised performance. Our experiments demonstrate the value of computing and analyzing test accuracies for every class, rather than only a total test accuracy. We show that our BOSS methodology can obtain total test accuracies of up to 95% on CIFAR-10 with only one labeled sample per class (compared to 94.5% for fully supervised). Similarly, on SVHN it obtains a test accuracy of 97.8%, compared to 98.27% for fully supervised. Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks. Our code is available at https://github.com/lnsmith54/BOSS to facilitate replication.
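The description above names class-balanced pseudo-labeling and self-training as core pieces of the BOSS method. The sketch below is a rough illustration of one common way class-balanced pseudo-labeling can be realized, not the authors' implementation (their code is at https://github.com/lnsmith54/BOSS); the function name, confidence threshold, and momentum value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def balanced_pseudo_labels(logits, running_class_freq, threshold=0.95, momentum=0.99):
    """Return (pseudo_labels, keep_mask) for one unlabeled batch.

    logits:             [batch, num_classes] raw model outputs on unlabeled data
    running_class_freq: [num_classes] running estimate of pseudo-label frequencies
    """
    probs = F.softmax(logits, dim=1)

    # Class balancing: down-weight classes the model currently over-predicts,
    # then renormalize so each row is again a distribution.
    aligned = probs / (running_class_freq + 1e-6)
    aligned = aligned / aligned.sum(dim=1, keepdim=True)

    conf, pseudo = aligned.max(dim=1)
    mask = conf >= threshold  # keep only confident predictions for self-training

    # Update the running estimate of the predicted class distribution.
    running_class_freq.mul_(momentum).add_(probs.mean(dim=0), alpha=1 - momentum)
    return pseudo, mask

# Usage inside a semi-supervised training step (model and unlabeled batch assumed):
num_classes = 10
running_freq = torch.full((num_classes,), 1.0 / num_classes)
logits = torch.randn(64, num_classes)  # stand-in for model(unlabeled_images)
pseudo, mask = balanced_pseudo_labels(logits, running_freq)
# unlabeled_loss = F.cross_entropy(logits[mask], pseudo[mask]) if mask.any() else 0.0
```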
format Online
Article
Text
id pubmed-9200967
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9200967 2022-06-17 Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance Smith, Leslie N. Conovaloff, Adam Front Artif Intell Artificial Intelligence Reaching the performance of fully supervised learning with unlabeled data and labeling only one sample per class might be ideal for deep learning applications. We demonstrate for the first time the potential for building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN up to test accuracies that are comparable to fully supervised learning. Our method combines class prototype refining, class balancing, and self-training. A good prototype choice is essential, and we propose a technique for obtaining iconic examples. In addition, we demonstrate that class balancing methods substantially improve accuracy in semi-supervised learning, to levels that allow self-training to reach fully supervised performance. Our experiments demonstrate the value of computing and analyzing test accuracies for every class, rather than only a total test accuracy. We show that our BOSS methodology can obtain total test accuracies of up to 95% on CIFAR-10 with only one labeled sample per class (compared to 94.5% for fully supervised). Similarly, on SVHN it obtains a test accuracy of 97.8%, compared to 98.27% for fully supervised. Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks. Our code is available at https://github.com/lnsmith54/BOSS to facilitate replication. Frontiers Media S.A. 2022-06-02 /pmc/articles/PMC9200967/ /pubmed/35719691 http://dx.doi.org/10.3389/frai.2022.880729 Text en Copyright © 2022 Smith and Conovaloff. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Smith, Leslie N.
Conovaloff, Adam
Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title_full Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title_fullStr Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title_full_unstemmed Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title_short Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance
title_sort building one-shot semi-supervised (boss) learning up to fully supervised performance
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9200967/
https://www.ncbi.nlm.nih.gov/pubmed/35719691
http://dx.doi.org/10.3389/frai.2022.880729
work_keys_str_mv AT smithleslien buildingoneshotsemisupervisedbosslearninguptofullysupervisedperformance
AT conovaloffadam buildingoneshotsemisupervisedbosslearninguptofullysupervisedperformance