Large-scale distributed training applied to generative adversarial networks for calorimeter simulation
In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural network models has been made tractable by improved optimization methods and by the advent of GP-GPUs, which are well adapted to the highly parallelizable task of training neural nets. Despite these advancements, training large models over large data sets can take days to weeks, and finding the best model architecture and settings can require many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup of training generative adversarial networks on a data set composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
Main Authors: | Vlimant, Jean-Roch, Pantaleo, Felice, Pierini, Maurizio, Loncar, Vladimir, Vallecorsa, Sofia, Anderson, Dustin, Nguyen, Thong, Zlokapa, Alexander |
Language: | eng |
Published: | 2019 |
Subjects: | Computing and Computers; Detectors and Experimental Techniques |
Online Access: | https://dx.doi.org/10.1051/epjconf/201921406025 http://cds.cern.ch/record/2699586 |
_version_ | 1780964499360055296 |
author | Vlimant, Jean-Roch Pantaleo, Felice Pierini, Maurizio Loncar, Vladimir Vallecorsa, Sofia Anderson, Dustin Nguyen, Thong Zlokapa, Alexander |
author_facet | Vlimant, Jean-Roch Pantaleo, Felice Pierini, Maurizio Loncar, Vladimir Vallecorsa, Sofia Anderson, Dustin Nguyen, Thong Zlokapa, Alexander |
author_sort | Vlimant, Jean-Roch |
collection | CERN |
description | In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural network models has been made tractable by improved optimization methods and by the advent of GP-GPUs, which are well adapted to the highly parallelizable task of training neural nets. Despite these advancements, training large models over large data sets can take days to weeks, and finding the best model architecture and settings can require many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup of training generative adversarial networks on a data set composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter. |
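The abstract describes MPI-based data-parallel training of Keras/PyTorch models. As a rough illustration of that scheme (a minimal sketch only, not the authors' framework), the snippet below averages gradients across MPI ranks with an Allreduce after every backward pass, so all workers apply the same update. The toy model, random data shards, and all hyperparameters are placeholder assumptions; it uses mpi4py with PyTorch on CPU.

```python
# Illustrative sketch of synchronous data-parallel training over MPI:
# each rank trains on its own data shard, and gradients are averaged
# with MPI_Allreduce before every optimizer step. NOT the authors' code;
# the model, data, and hyperparameters are placeholders.
import numpy as np
import torch
import torch.nn as nn
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

torch.manual_seed(0)  # same seed on every rank -> identical initial weights
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical per-rank shard; a real run would load this rank's slice of
# the calorimeter data set instead of random numbers.
torch.manual_seed(1234 + rank)
x = torch.randn(256, 16)
y = torch.randn(256, 1)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Sum each parameter's gradient over all ranks, then divide by the
    # worker count so every rank applies the same averaged update.
    for p in model.parameters():
        grad = p.grad.detach().numpy()
        avg = np.empty_like(grad)
        comm.Allreduce(grad, avg, op=MPI.SUM)
        p.grad.copy_(torch.from_numpy(avg / size))
    opt.step()
    if rank == 0 and step % 20 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")
```

A run would look like `mpirun -np 4 python train_sketch.py` (hypothetical file name). In a GAN, the same Allreduce pattern would be applied separately to the generator and discriminator gradients; a master-worker layout, where workers send gradients to a parameter server instead of all-reducing, is another common design for this kind of framework.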
id | oai-inspirehep.net-1761288 |
institution | European Organization for Nuclear Research (CERN) |
language | eng |
publishDate | 2019 |
record_format | invenio |
spelling | oai-inspirehep.net-1761288 | 2022-08-10T12:26:58Z | doi:10.1051/epjconf/201921406025 | http://cds.cern.ch/record/2699586 | eng | Vlimant, Jean-Roch; Pantaleo, Felice; Pierini, Maurizio; Loncar, Vladimir; Vallecorsa, Sofia; Anderson, Dustin; Nguyen, Thong; Zlokapa, Alexander | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation | Computing and Computers; Detectors and Experimental Techniques | In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural network models has been made tractable by improved optimization methods and by the advent of GP-GPUs, which are well adapted to the highly parallelizable task of training neural nets. Despite these advancements, training large models over large data sets can take days to weeks, and finding the best model architecture and settings can require many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup of training generative adversarial networks on a data set composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter. | oai:inspirehep.net:1761288 | 2019 |
spellingShingle | Computing and Computers Detectors and Experimental Techniques Vlimant, Jean-Roch Pantaleo, Felice Pierini, Maurizio Loncar, Vladimir Vallecorsa, Sofia Anderson, Dustin Nguyen, Thong Zlokapa, Alexander Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title_full | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title_fullStr | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title_full_unstemmed | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title_short | Large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
title_sort | large-scale distributed training applied to generative adversarial networks for calorimeter simulation |
topic | Computing and Computers Detectors and Experimental Techniques |
url | https://dx.doi.org/10.1051/epjconf/201921406025 http://cds.cern.ch/record/2699586 |
work_keys_str_mv | AT vlimantjeanroch largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT pantaleofelice largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT pierinimaurizio largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT loncarvladimir largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT vallecorsasofia largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT andersondustin largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT nguyenthong largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation AT zlokapaalexander largescaledistributedtrainingappliedtogenerativeadversarialnetworksforcalorimetersimulation |