Generative Adversarial Training for Supervised and Semi-supervised Learning
Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations during training, but it sometimes still fails to generate worst-case perturbations, thus resulting in limited improvement…
Main Authors: | Wang, Xianmin; Li, Jing; Liu, Qi; Zhao, Wenpeng; Li, Zuoyong; Wang, Wenhao |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2022 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8988301/ https://www.ncbi.nlm.nih.gov/pubmed/35401139 http://dx.doi.org/10.3389/fnbot.2022.859610 |
_version_ | 1784682931650822144 |
author | Wang, Xianmin; Li, Jing; Liu, Qi; Zhao, Wenpeng; Li, Zuoyong; Wang, Wenhao
author_facet | Wang, Xianmin; Li, Jing; Liu, Qi; Zhao, Wenpeng; Li, Zuoyong; Wang, Wenhao
author_sort | Wang, Xianmin |
collection | PubMed |
description | Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations during training, but it sometimes still fails to generate worst-case perturbations, thus resulting in limited improvement. Instead of designing a specific smoothness function and seeking an approximate solution, as existing AT methods do, in this article we propose a new training methodology, named Generative AT (GAT), for supervised and semi-supervised learning. The key idea of GAT is to formulate the learning task as a minimax game, in which the perturbation generator aims to yield the worst-case perturbations that maximize the deviation of the output distribution, while the target classifier aims to minimize both the impact of this perturbation and the prediction error. To solve this minimax optimization problem, a new adversarial loss function is constructed based on the cross-entropy measure. As a result, both the smoothness and the confidence of the model are greatly improved. Moreover, we develop a trajectory-preserving alternating update strategy to enable stable training of GAT. Extensive experiments on benchmark datasets demonstrate that the proposed GAT significantly outperforms state-of-the-art AT methods on supervised and semi-supervised learning tasks, especially when the number of labeled examples is small in semi-supervised learning.
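The description above states the minimax formulation only in words. As a reading aid, the block below sketches one plausible LaTeX rendering of that objective; the notation is an assumption, not taken from the paper: f_theta is the target classifier, g_phi the perturbation generator, D_l the labeled set, D the full (labeled plus unlabeled) input set, CE the cross-entropy measure, and lambda a trade-off weight.

```latex
% One plausible reading of the GAT objective (notation assumed, not from the paper):
% the generator maximizes the cross-entropy deviation between the clean and
% perturbed output distributions, while the classifier minimizes that deviation
% together with the supervised prediction error.
\min_{\theta} \Big\{
  \mathbb{E}_{(x,y)\sim\mathcal{D}_l}\big[\mathrm{CE}\big(y,\, f_\theta(x)\big)\big]
  \;+\; \lambda \max_{\phi}\,
  \mathbb{E}_{x\sim\mathcal{D}}\big[\mathrm{CE}\big(f_\theta(x),\, f_\theta(x + g_\phi(x))\big)\big]
\Big\}
```

To make the alternating update concrete, here is a minimal PyTorch-style sketch of one training step under this reading. Everything in it is an illustrative assumption: the bounded-perturbation form `eps * tanh(...)`, the loss weight `lam`, and the plain alternating step are not from the paper, and the paper's trajectory-preserving update strategy is not reproduced here.

```python
import torch
import torch.nn.functional as F

def gat_step(classifier, generator, opt_c, opt_g,
             x_labeled, y_labeled, x_unlabeled, eps=8 / 255, lam=1.0):
    """One plain alternating update of the GAT minimax game (illustrative sketch)."""
    x_all = torch.cat([x_labeled, x_unlabeled], dim=0)

    # Generator step: maximize the cross-entropy deviation between the
    # classifier's clean and perturbed output distributions.
    with torch.no_grad():
        p_clean = F.softmax(classifier(x_all), dim=1)
    r = eps * torch.tanh(generator(x_all))               # bounded perturbation (assumed form)
    logp_adv = F.log_softmax(classifier(x_all + r), dim=1)
    deviation = -(p_clean * logp_adv).sum(dim=1).mean()  # CE(p_clean, p_adv)
    opt_g.zero_grad()
    (-deviation).backward()                              # gradient ascent on the deviation
    opt_g.step()

    # Classifier step: minimize the prediction error on labeled data plus the
    # deviation under the (now frozen) generator's worst-case perturbation.
    with torch.no_grad():
        r = eps * torch.tanh(generator(x_all))
    logits_clean = classifier(x_all)
    p_clean = F.softmax(logits_clean, dim=1).detach()
    logp_adv = F.log_softmax(classifier(x_all + r), dim=1)
    smooth_loss = -(p_clean * logp_adv).sum(dim=1).mean()
    sup_loss = F.cross_entropy(logits_clean[: x_labeled.size(0)], y_labeled)
    opt_c.zero_grad()
    (sup_loss + lam * smooth_loss).backward()
    opt_c.step()
```

A full training loop would call `gat_step` once per minibatch; the generator and classifier are updated in alternation so that each optimizes its side of the minimax game against a fixed opponent.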
format | Online Article Text |
id | pubmed-8988301 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8988301 2022-04-08 Generative Adversarial Training for Supervised and Semi-supervised Learning Wang, Xianmin; Li, Jing; Liu, Qi; Zhao, Wenpeng; Li, Zuoyong; Wang, Wenhao Front Neurorobot Neuroscience Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations during training, but it sometimes still fails to generate worst-case perturbations, thus resulting in limited improvement. Instead of designing a specific smoothness function and seeking an approximate solution, as existing AT methods do, in this article we propose a new training methodology, named Generative AT (GAT), for supervised and semi-supervised learning. The key idea of GAT is to formulate the learning task as a minimax game, in which the perturbation generator aims to yield the worst-case perturbations that maximize the deviation of the output distribution, while the target classifier aims to minimize both the impact of this perturbation and the prediction error. To solve this minimax optimization problem, a new adversarial loss function is constructed based on the cross-entropy measure. As a result, both the smoothness and the confidence of the model are greatly improved. Moreover, we develop a trajectory-preserving alternating update strategy to enable stable training of GAT. Extensive experiments on benchmark datasets demonstrate that the proposed GAT significantly outperforms state-of-the-art AT methods on supervised and semi-supervised learning tasks, especially when the number of labeled examples is small in semi-supervised learning. Frontiers Media S.A. 2022-03-24 /pmc/articles/PMC8988301/ /pubmed/35401139 http://dx.doi.org/10.3389/fnbot.2022.859610 Text en Copyright © 2022 Wang, Li, Liu, Zhao, Li and Wang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Neuroscience Wang, Xianmin Li, Jing Liu, Qi Zhao, Wenpeng Li, Zuoyong Wang, Wenhao Generative Adversarial Training for Supervised and Semi-supervised Learning |
title | Generative Adversarial Training for Supervised and Semi-supervised Learning |
title_full | Generative Adversarial Training for Supervised and Semi-supervised Learning |
title_fullStr | Generative Adversarial Training for Supervised and Semi-supervised Learning |
title_full_unstemmed | Generative Adversarial Training for Supervised and Semi-supervised Learning |
title_short | Generative Adversarial Training for Supervised and Semi-supervised Learning |
title_sort | generative adversarial training for supervised and semi-supervised learning |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8988301/ https://www.ncbi.nlm.nih.gov/pubmed/35401139 http://dx.doi.org/10.3389/fnbot.2022.859610 |
work_keys_str_mv | AT wangxianmin generativeadversarialtrainingforsupervisedandsemisupervisedlearning AT lijing generativeadversarialtrainingforsupervisedandsemisupervisedlearning AT liuqi generativeadversarialtrainingforsupervisedandsemisupervisedlearning AT zhaowenpeng generativeadversarialtrainingforsupervisedandsemisupervisedlearning AT lizuoyong generativeadversarialtrainingforsupervisedandsemisupervisedlearning AT wangwenhao generativeadversarialtrainingforsupervisedandsemisupervisedlearning |