Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are supposed to minimize the Kullback–Leibler divergence between distributions of natural a...
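For reference, the Amari α-divergence named in the title is commonly written in the following standard parameterization; the paper's exact form and normalization may differ:

\[
D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\]

which recovers the Kullback–Leibler divergence \mathrm{KL}(p \,\|\, q) as \alpha \to 1 and the reverse divergence \mathrm{KL}(q \,\|\, p) as \alpha \to 0, so α interpolates between the two.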
Main Authors: Cai, Likun; Chen, Yanjie; Cai, Ning; Cheng, Wei; Wang, Hao
Format: Online Article Text
Language: English
Published: MDPI, 2020
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7516886/
https://www.ncbi.nlm.nih.gov/pubmed/33286184
http://dx.doi.org/10.3390/e22040410
Similar Items
- The SCHWIND AMARIS Total-Tech Laser as An All-Rounder in Refractive Surgery
  by: Arbelaez, Maria Clara, et al.
  Published: (2009)
- Distributed Training of Generative Adversarial Networks for Fast Simulation
  by: Vallecorsa, Sofia, et al.
  Published: (2019)
- Impact of quantum noise on the training of quantum Generative Adversarial Networks
  by: Borras, Kerstin, et al.
  Published: (2023)
- Evaluating POWER Architecture for Distributed Training of Generative Adversarial Networks
  by: Hesam, Ahmad, et al.
  Published: (2019)
- Generating mobility networks with generative adversarial networks
  by: Mauro, Giovanni, et al.
  Published: (2022)