
Distributed Training of Generative Adversarial Networks for Fast Simulation


Bibliographic Details
Main Authors: Vallecorsa, Sofia; Khattak, Gul Rukh
Language: eng
Published: 2019
Subjects:
Online Access: http://cds.cern.ch/record/2692155
author Vallecorsa, Sofia
Khattak, Gul Rukh
collection CERN
description Deep Learning techniques are being studied for different applications by the HEP community; in this talk, we discuss the case of detector simulation. The need for simulated events, expected in the future for LHC experiments and their High Luminosity upgrades, is increasing dramatically and requires new fast-simulation solutions. We will describe an R&D activity within CERN openlab aimed at providing a configurable tool capable of training a neural network to reproduce the detector response and replace standard Monte Carlo simulation. This represents a generic approach, in the sense that such a network could be designed and trained to simulate any kind of detector in just a small fraction of the time. We will present the first application of three-dimensional convolutional Generative Adversarial Networks to the simulation of high-granularity electromagnetic calorimeters. We have implemented our model using Keras + TensorFlow, and we have tested distributed training using the Horovod framework; the performance of parallelized GAN training on HPC clusters will be discussed in detail. Results of preliminary runs conducted on the Stampede2 cluster at TACC were presented at the SC'18 IXPUG workshop last year, where close-to-linear scaling was measured up to 128 nodes. Since then we have further improved performance on single nodes, reducing both training and inference time. This results in a 20000x speedup with respect to standard Monte Carlo simulation. Physics performance at scale will also be discussed in detail.
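The abstract's distributed-training scheme rests on data-parallel gradient averaging: each worker computes gradients on its own data shard, and Horovod combines them with a ring-allreduce before every optimizer step. The sketch below is an illustrative stand-in only, not the authors' code: it simulates the workers in a single process with NumPy, and uses a plain least-squares gradient in place of the real 3D-GAN generator/discriminator gradients. The function names (`worker_gradient`, `allreduce_mean`, `distributed_step`) are hypothetical.

```python
import numpy as np

def worker_gradient(weights, shard):
    # Gradient of a least-squares loss on one worker's data shard
    # (a stand-in for the GAN gradients computed on each HPC node).
    X, y = shard
    residual = X @ weights - y
    return X.T @ residual / len(y)

def allreduce_mean(grads):
    # What an allreduce-based framework computes for us:
    # the element-wise mean of the per-worker gradients.
    return np.mean(grads, axis=0)

def distributed_step(weights, shards, lr=0.1):
    # One synchronous data-parallel step: every "worker" computes its
    # local gradient, the gradients are averaged, then all workers
    # apply the same update, keeping the replicas in sync.
    grads = [worker_gradient(weights, s) for s in shards]
    return weights - lr * allreduce_mean(grads)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

# Shard the dataset across 4 simulated workers, round-robin.
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(4)
for _ in range(200):
    w = distributed_step(w, shards)
```

Because the averaged shard gradients equal the full-batch gradient here, the simulated run recovers `true_w`; in the real setup the same averaging is what lets the effective batch size, and hence throughput, grow with the node count, which is why close-to-linear scaling up to 128 nodes is the figure of merit.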
id cern-2692155
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2019
record_format invenio
spelling cern-2692155 (2022-11-02T22:24:39Z). http://cds.cern.ch/record/2692155, eng. Vallecorsa, Sofia; Khattak, Gul Rukh. Distributed Training of Generative Adversarial Networks for Fast Simulation. IXPUG 2019 Annual Conference at CERN (other events or meetings). Abstract as in the description field above. oai:cds.cern.ch:2692155, 2019.
title Distributed Training of Generative Adversarial Networks for Fast Simulation
topic other events or meetings
url http://cds.cern.ch/record/2692155