Accelerating GAN training using highly parallel hardware on public cloud
With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a para...
Main authors: | Cardoso, Renato; Golubovic, Dejan; Lozada, Ignacio Peluaga; Rocha, Ricardo; Fernandes, João; Vallecorsa, Sofia |
---|---|
Language: | eng |
Published: | 2021 |
Subjects: | Computing and Computers |
Online access: | https://dx.doi.org/10.1051/epjconf/202125102073 http://cds.cern.ch/record/2780109 |
_version_ | 1780971850493329408 |
---|---|
author | Cardoso, Renato; Golubovic, Dejan; Lozada, Ignacio Peluaga; Rocha, Ricardo; Fernandes, João; Vallecorsa, Sofia |
author_facet | Cardoso, Renato; Golubovic, Dejan; Lozada, Ignacio Peluaga; Rocha, Ricardo; Fernandes, João; Vallecorsa, Sofia |
author_sort | Cardoso, Renato |
collection | CERN |
description | With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment, using the TensorFlow data parallel strategy. More specifically, we parallelize the training process on multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom loop, optimised to give finer control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-up of the training process is obtained, while retaining most of the performance in terms of physics results. Additionally, we benchmark the aforementioned approaches, at scale, over multiple GPU nodes, deploying the training process on different public cloud providers, seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options and associated economics makes it possible to burst out heterogeneously, exploring the full potential of cloud-based services. |
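The record contains no code, but the abstract's comparison of the TensorFlow built-in logic versus a custom training loop can be illustrated with a minimal, hypothetical sketch using tf.distribute.MirroredStrategy for data-parallel training on multiple GPUs (a TPUStrategy would be used analogously on TPU cores). The toy generator/discriminator, the random placeholder dataset, and the sizes GLOBAL_BATCH, LATENT_DIM and FEATURES are illustrative assumptions, not the authors' actual model or configuration.

```python
import tensorflow as tf

# One replica per visible GPU; falls back to a single CPU replica if none.
strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 64 * strategy.num_replicas_in_sync
LATENT_DIM = 128
FEATURES = 64  # placeholder sample size, not the paper's detector data shape

def make_dataset():
    # Placeholder data: random vectors standing in for real training samples.
    real = tf.random.normal((4096, FEATURES))
    ds = tf.data.Dataset.from_tensor_slices(real).shuffle(4096).batch(GLOBAL_BATCH)
    # The strategy splits each global batch across the replicas.
    return strategy.experimental_distribute_dataset(ds)

with strategy.scope():  # variables must be created under the strategy scope
    generator = tf.keras.Sequential([
        tf.keras.Input(shape=(LATENT_DIM,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(FEATURES),
    ])
    discriminator = tf.keras.Sequential([
        tf.keras.Input(shape=(FEATURES,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True, reduction="none")

def train_step(real_batch):
    # Runs once per replica on that replica's shard of the global batch.
    noise = tf.random.normal((tf.shape(real_batch)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fake, training=True)
        # Average over the *global* batch so gradients summed across replicas
        # match what a single device would compute.
        d_loss = tf.nn.compute_average_loss(
            bce(tf.ones_like(real_logits), real_logits)
            + bce(tf.zeros_like(fake_logits), fake_logits),
            global_batch_size=GLOBAL_BATCH)
        g_loss = tf.nn.compute_average_loss(
            bce(tf.ones_like(fake_logits), fake_logits),
            global_batch_size=GLOBAL_BATCH)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

@tf.function
def distributed_step(batch):
    per_replica = strategy.run(train_step, args=(batch,))
    # Sum the per-replica (already globally averaged) losses for logging.
    return [strategy.reduce(tf.distribute.ReduceOp.SUM, v, axis=None)
            for v in per_replica]

dataset = make_dataset()
for epoch in range(2):
    for batch in dataset:
        d_loss, g_loss = distributed_step(batch)
```

The "built-in logic" alternative mentioned in the abstract corresponds to compiling a Keras model under the same strategy.scope() and calling model.fit(), which shards batches and aggregates gradients automatically; the custom loop above trades that convenience for explicit control over what each GPU worker or TPU core processes.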
id | cern-2780109 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2021 |
record_format | invenio |
spelling | cern-2780109 2023-01-31T08:07:40Z doi:10.1051/epjconf/202125102073 http://cds.cern.ch/record/2780109 eng Cardoso, Renato; Golubovic, Dejan; Lozada, Ignacio Peluaga; Rocha, Ricardo; Fernandes, João; Vallecorsa, Sofia Accelerating GAN training using highly parallel hardware on public cloud Computing and Computers With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment, using the TensorFlow data parallel strategy. More specifically, we parallelize the training process on multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom loop, optimised to give finer control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-up of the training process is obtained, while retaining most of the performance in terms of physics results. Additionally, we benchmark the aforementioned approaches, at scale, over multiple GPU nodes, deploying the training process on different public cloud providers, seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options and associated economics makes it possible to burst out heterogeneously, exploring the full potential of cloud-based services. arXiv:2111.04628 oai:cds.cern.ch:2780109 2021 |
spellingShingle | Computing and Computers; Cardoso, Renato; Golubovic, Dejan; Lozada, Ignacio Peluaga; Rocha, Ricardo; Fernandes, João; Vallecorsa, Sofia; Accelerating GAN training using highly parallel hardware on public cloud |
title | Accelerating GAN training using highly parallel hardware on public cloud |
title_full | Accelerating GAN training using highly parallel hardware on public cloud |
title_fullStr | Accelerating GAN training using highly parallel hardware on public cloud |
title_full_unstemmed | Accelerating GAN training using highly parallel hardware on public cloud |
title_short | Accelerating GAN training using highly parallel hardware on public cloud |
title_sort | accelerating gan training using highly parallel hardware on public cloud |
topic | Computing and Computers |
url | https://dx.doi.org/10.1051/epjconf/202125102073 http://cds.cern.ch/record/2780109 |
work_keys_str_mv | AT cardosorenato acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT golubovicdejan acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT lozadaignaciopeluaga acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT rocharicardo acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT fernandesjoao acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT vallecorsasofia acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud |