
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks


Bibliographic Details
Main Authors: Cai, Likun, Chen, Yanjie, Cai, Ning, Cheng, Wei, Wang, Hao
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7516886/
https://www.ncbi.nlm.nih.gov/pubmed/33286184
http://dx.doi.org/10.3390/e22040410
_version_ 1783587102404902912
author Cai, Likun
Chen, Yanjie
Cai, Ning
Cheng, Wei
Wang, Hao
author_facet Cai, Likun
Chen, Yanjie
Cai, Ning
Cheng, Wei
Wang, Hao
author_sort Cai, Likun
collection PubMed
description Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are designed to minimize the Kullback–Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of its generator. The alpha divergence can be regarded as a generalization of the Kullback–Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, etc. Our Alpha-GAN employs a power function as the form of the discriminator's adversarial loss, with two order indexes. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of the generated images. Extensive experiments with Alpha-GAN on the SVHN and CelebA datasets demonstrate its training stability, and the generated samples are competitive with those of state-of-the-art approaches.
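
The description above names the alpha divergence as a generalization of several standard divergences. For reference, one common parameterization of the Amari alpha divergence is sketched below; scaling conventions vary across the literature, and the paper itself may use a different one.

    % One common form of the Amari alpha divergence between densities
    % p (data) and q (model); the scaling convention here is an assumption.
    \[
      D_{\alpha}(p \,\|\, q)
        = \frac{1}{\alpha(1-\alpha)}
          \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, \mathrm{d}x \right)
    \]
    % Special cases under this convention (up to constant scaling):
    %   alpha -> 1 :  KL(p || q)            (Kullback–Leibler)
    %   alpha -> 0 :  KL(q || p)            (reverse Kullback–Leibler)
    %   alpha = 2  :  (1/2) chi^2(p || q)   (Pearson chi-squared)
    %   alpha = 1/2:  4 Hel^2(p, q)         (squared Hellinger)

The description does not give the paper's exact loss, but under the standard GAN assumption that a well-trained discriminator d(x) approximates p(x) / (p(x) + q(x)), the divergence above can be estimated from discriminator outputs on generated samples. A minimal illustrative sketch follows; the function name and the estimator are assumptions made for exposition, not the authors' method.

    import numpy as np

    def alpha_divergence_estimate(d_on_fake, alpha, eps=1e-7):
        """Monte Carlo estimate of D_alpha(p || q) from discriminator outputs.

        d_on_fake : discriminator outputs in (0, 1) on samples x ~ q.
        Assumes d(x) ~= p(x) / (p(x) + q(x)), so the density ratio is
        r(x) = p(x) / q(x) = d(x) / (1 - d(x)), and the integral of
        p^alpha * q^(1 - alpha) equals E_{x~q}[r(x)^alpha].
        """
        d = np.clip(np.asarray(d_on_fake, dtype=float), eps, 1.0 - eps)
        ratio = d / (1.0 - d)                  # density-ratio estimate r(x)
        if np.isclose(alpha, 1.0):             # limiting case: KL(p || q)
            return float(np.mean(ratio * np.log(ratio)))
        if np.isclose(alpha, 0.0):             # limiting case: KL(q || p)
            return float(np.mean(-np.log(ratio)))
        return float((1.0 - np.mean(ratio ** alpha)) / (alpha * (1.0 - alpha)))

    # Outputs near 0.5 mean the discriminator cannot tell the two
    # distributions apart, so the estimate should be close to zero.
    print(alpha_divergence_estimate([0.49, 0.50, 0.51], alpha=0.5))

Varying alpha interpolates between mass-covering behavior (alpha near 1, forward-KL-like) and mode-seeking behavior (alpha near 0, reverse-KL-like), which is one way to read the description's claim that the two order indexes let the model trade off between the generated and target distributions.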
format Online
Article
Text
id pubmed-7516886
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7516886 2020-11-09 Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks Cai, Likun Chen, Yanjie Cai, Ning Cheng, Wei Wang, Hao Entropy (Basel) Article Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are designed to minimize the Kullback–Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of its generator. The alpha divergence can be regarded as a generalization of the Kullback–Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, etc. Our Alpha-GAN employs a power function as the form of the discriminator's adversarial loss, with two order indexes. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of the generated images. Extensive experiments with Alpha-GAN on the SVHN and CelebA datasets demonstrate its training stability, and the generated samples are competitive with those of state-of-the-art approaches. MDPI 2020-04-04 /pmc/articles/PMC7516886/ /pubmed/33286184 http://dx.doi.org/10.3390/e22040410 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Cai, Likun
Chen, Yanjie
Cai, Ning
Cheng, Wei
Wang, Hao
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title_full Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title_fullStr Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title_full_unstemmed Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title_short Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
title_sort utilizing amari-alpha divergence to stabilize the training of generative adversarial networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7516886/
https://www.ncbi.nlm.nih.gov/pubmed/33286184
http://dx.doi.org/10.3390/e22040410
work_keys_str_mv AT cailikun utilizingamarialphadivergencetostabilizethetrainingofgenerativeadversarialnetworks
AT chenyanjie utilizingamarialphadivergencetostabilizethetrainingofgenerativeadversarialnetworks
AT caining utilizingamarialphadivergencetostabilizethetrainingofgenerativeadversarialnetworks
AT chengwei utilizingamarialphadivergencetostabilizethetrainingofgenerativeadversarialnetworks
AT wanghao utilizingamarialphadivergencetostabilizethetrainingofgenerativeadversarialnetworks