Accelerating GAN training using highly parallel hardware on public cloud
With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network...
Main author: | Da Costa Cardoso, Renato Paulo |
---|---|
Language: | eng |
Published: | 2021 |
Subjects: | Conferences |
Online access: | http://cds.cern.ch/record/2767302 |
_version_ | 1780971291430354944 |
author | Da Costa Cardoso, Renato Paulo |
author_facet | Da Costa Cardoso, Renato Paulo |
author_sort | Da Costa Cardoso, Renato Paulo |
collection | CERN |
description | With the increasing number of Machine and Deep Learning applications
in High Energy Physics, easy access to dedicated infrastructure represents
a requirement for fast and efficient R&D. This work explores different types
of cloud services to train a Generative Adversarial Network (GAN) in a parallel
environment, using the TensorFlow data-parallel strategy. More specifically,
we parallelize the training process on multiple GPUs and Google Tensor Processing
Units (TPUs), and we compare two algorithms: the TensorFlow built-in
logic and a custom loop, optimised for higher control over the elements assigned
to each GPU worker or TPU core. The quality of the generated data
is compared to Monte Carlo simulation. Linear speed-up of the training process
is obtained, while retaining most of the performance in terms of physics
results. Additionally, we benchmark the aforementioned approaches at scale,
over multiple GPU nodes, deploying the training process on different public
cloud providers, seeking overall efficiency and cost-effectiveness. The combination
of data science, cloud deployment options and associated economics
allows bursting out heterogeneously, exploring the full potential of cloud-based
services. |
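The "custom loop" the abstract contrasts with TensorFlow's built-in logic can be sketched as follows. This is not the authors' code: the tiny linear generator and discriminator, all sizes, and the plain-SGD update are illustrative assumptions. What it shows is the point of the custom loop, namely explicit control over what each replica (GPU worker or TPU core, via `tf.distribute.MirroredStrategy` or `TPUStrategy`) computes and how per-replica gradients are merged.

```python
# Minimal sketch of a data-parallel GAN training step with a custom loop
# under tf.distribute. Models and hyperparameters are illustrative only.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # one replica per visible GPU (CPU: 1)
NZ, D, LR = 16, 8, 1e-3
GLOBAL_BATCH = 32 * strategy.num_replicas_in_sync

with strategy.scope():                        # variables mirrored on every replica
    Wg = tf.Variable(tf.random.normal([NZ, D], stddev=0.1))  # linear "generator"
    Wd = tf.Variable(tf.random.normal([D, 1], stddev=0.1))   # linear "discriminator"
g_vars, d_vars = [Wg], [Wd]

def replica_step(real):
    """Runs on each replica with its shard of the global batch."""
    z = tf.random.normal([tf.shape(real)[0], NZ])
    with tf.GradientTape(persistent=True) as tape:
        fake = z @ Wg
        real_logit = real @ Wd
        fake_logit = fake @ Wd
        # Scale losses by the *global* batch size, so summing the per-replica
        # gradients across replicas reproduces single-device training exactly.
        d_loss = tf.nn.compute_average_loss(
            tf.nn.sigmoid_cross_entropy_with_logits(
                tf.ones_like(real_logit), real_logit)[:, 0] +
            tf.nn.sigmoid_cross_entropy_with_logits(
                tf.zeros_like(fake_logit), fake_logit)[:, 0],
            global_batch_size=GLOBAL_BATCH)
        g_loss = tf.nn.compute_average_loss(
            tf.nn.sigmoid_cross_entropy_with_logits(
                tf.ones_like(fake_logit), fake_logit)[:, 0],
            global_batch_size=GLOBAL_BATCH)
    grads = tape.gradient(d_loss, d_vars) + tape.gradient(g_loss, g_vars)
    del tape
    return grads, d_loss + g_loss

@tf.function
def distributed_step(batch):
    per_replica_grads, per_replica_loss = strategy.run(replica_step, args=(batch,))
    # Explicit all-reduce: this is the control the custom loop buys you.
    grads = [strategy.reduce(tf.distribute.ReduceOp.SUM, g, axis=None)
             for g in per_replica_grads]
    for v, g in zip(d_vars + g_vars, grads):
        v.assign_sub(LR * g)                  # plain SGD, applied once per step
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

# Toy stand-in for the Monte Carlo "real" samples the paper trains against.
data = tf.data.Dataset.from_tensor_slices(tf.random.normal([128, D]))
dist_data = strategy.experimental_distribute_dataset(data.batch(GLOBAL_BATCH))

losses = [float(distributed_step(b)) for b in dist_data]
print(losses)
```

The built-in alternative mentioned in the abstract would instead build a Keras model inside `strategy.scope()` and call `model.fit`, leaving sharding and gradient aggregation to TensorFlow; the custom loop trades that convenience for per-replica control.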
id | cern-2767302 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2021 |
record_format | invenio |
spelling | cern-2767302 · 2022-11-02T22:25:36Z · http://cds.cern.ch/record/2767302 · eng · Da Costa Cardoso, Renato Paulo · Accelerating GAN training using highly parallel hardware on public cloud · 25th International Conference on Computing in High Energy & Nuclear Physics · Conferences · oai:cds.cern.ch:2767302 · 2021 |
spellingShingle | Conferences Da Costa Cardoso, Renato Paulo Accelerating GAN training using highly parallel hardware on public cloud |
title | Accelerating GAN training using highly parallel hardware on public cloud |
title_full | Accelerating GAN training using highly parallel hardware on public cloud |
title_fullStr | Accelerating GAN training using highly parallel hardware on public cloud |
title_full_unstemmed | Accelerating GAN training using highly parallel hardware on public cloud |
title_short | Accelerating GAN training using highly parallel hardware on public cloud |
title_sort | accelerating gan training using highly parallel hardware on public cloud |
topic | Conferences |
url | http://cds.cern.ch/record/2767302 |
work_keys_str_mv | AT dacostacardosorenatopaulo acceleratinggantrainingusinghighlyparallelhardwareonpubliccloud AT dacostacardosorenatopaulo 25thinternationalconferenceoncomputinginhighenergynuclearphysics |