
THE SUCCESS OF DEEP GENERATIVE MODELS

Deep generative models allow us to learn hidden representations of data and to generate new examples. Two major families of models dominate current applications: Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). The principle of GANs is to train a generator that produces examples from random noise, in opposition to a discriminator that is trained to distinguish true samples from generated ones. Images generated by GANs are very sharp and detailed. The biggest disadvantage of GANs is that they are trained by solving a minimax optimization problem, which causes significant learning instability. VAEs are based on a fully probabilistic view of variational inference: the learning problem is to maximize the variational lower bound for a given family of variational posteriors. The model can be trained by backpropagation, but the resulting generated images tend to be blurry. However, because VAEs are probabilistic models, they can be incorporated into almost any probabilistic framework. We will discuss the basics of both approaches and present recent extensions, point out the advantages and disadvantages of GANs and VAEs, and show some of the most promising applications of deep generative models.
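The adversarial game described in the abstract is commonly written as a minimax objective over a generator G and a discriminator D; as a sketch (standard notation, not taken from the seminar itself):

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

D is pushed to assign high probability to real samples and low probability to generated ones, while G is pushed in the opposite direction; the alternating optimization of these two objectives is the source of the instability mentioned above.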

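The variational lower bound (ELBO) that VAEs maximize can be illustrated numerically. Below is a minimal sketch, assuming a diagonal-Gaussian posterior q(z|x) and a standard-normal prior, using the closed-form KL term; the function names and the stand-in reconstruction term are illustrative, not from the seminar:

```python
import math

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    the regularization term of the variational lower bound."""
    return sum(
        0.5 * (math.exp(lv) + m * m - 1.0 - lv)
        for m, lv in zip(mu, log_var)
    )

def elbo(recon_log_lik, mu, log_var):
    """ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)).
    `recon_log_lik` stands in for the (estimated) reconstruction term."""
    return recon_log_lik - gaussian_kl(mu, log_var)

# When q(z|x) equals the prior, the KL term vanishes and the
# ELBO reduces to the reconstruction term alone.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))   # 0.0
print(elbo(-1.5, [0.0, 0.0], [0.0, 0.0]))    # -1.5
```

Because every piece of this bound is differentiable (with the usual reparameterization of the sampling step), the whole model can be trained by backpropagation, as the abstract notes.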

Bibliographic Details
Main author: Tomczak, Jakub
Language: eng
Published: 2018
Subjects: EP-IT Data science seminars
Online access: http://cds.cern.ch/record/2628487
Collection: CERN
Institution: European Organization for Nuclear Research
Record ID: cern-2628487 (Invenio)