
Data-Parallel Training of Generative Adversarial Networks on HPC Systems for HEP Simulations

Bibliographic Details
Main Authors: Vallecorsa, Sofia; Moise, Diana; Carminati, Federico; Khattak, Gul Rukh
Language: eng
Published: 2018
Subjects:
Online Access: https://dx.doi.org/10.1109/hipc.2018.00026
http://cds.cern.ch/record/2838905
Description
Summary: In the field of High Energy Physics (HEP), simulating the interaction of particles with detector materials is a compute-intensive task that currently uses 50% of the computing resources globally available as part of the Worldwide LHC Computing Grid (WLCG). Since some level of approximation is acceptable, it is possible to implement simplified fast-simulation models that are less computationally intensive. In this work, we present a fast simulation approach based on Generative Adversarial Networks (GANs). The model consists of a conditional generative network that describes the detector response and a discriminative network; both networks are trained in an adversarial manner. The adversarial training process is computationally intensive, and the application of a distributed approach is not straightforward. We rely on the MPI-based Cray Machine Learning Plugin to efficiently train the GAN over multiple nodes and GPGPUs. We report preliminary results on the accuracy of the generated samples and on the scaling of the time to solution. We demonstrate how HPC systems could be utilized to optimize this kind of model, thanks to their large computational power and highly efficient interconnect.
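
To make the data-parallel adversarial training pattern described in the summary concrete, the sketch below shows a minimal conditional GAN training step with gradients averaged across MPI ranks. The Cray Machine Learning Plugin API is not shown in this record, so Horovod (likewise MPI/allreduce based) is used here as a stand-in; the layer sizes, latent dimension, 8x8x8 sample shape, single energy conditioning input, and learning-rate scaling are illustrative assumptions, not the authors' actual model or settings.

import numpy as np
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
# One MPI rank per GPGPU, as in the multi-node, multi-GPU setup the summary describes.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank() % len(gpus)], 'GPU')

LATENT_DIM, IMG_SHAPE = 64, (8, 8, 8)   # illustrative sizes, not the paper's architecture

def make_generator():
    noise = tf.keras.Input(shape=(LATENT_DIM,))
    energy = tf.keras.Input(shape=(1,))   # conditioning variable, e.g. incoming particle energy
    x = tf.keras.layers.Concatenate()([noise, energy])
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.Dense(int(np.prod(IMG_SHAPE)), activation="relu")(x)
    return tf.keras.Model([noise, energy], tf.keras.layers.Reshape(IMG_SHAPE)(x))

def make_discriminator():
    img = tf.keras.Input(shape=IMG_SHAPE)
    energy = tf.keras.Input(shape=(1,))
    x = tf.keras.layers.Concatenate()([tf.keras.layers.Flatten()(img), energy])
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    return tf.keras.Model([img, energy], tf.keras.layers.Dense(1)(x))   # real/fake logit

gen, disc = make_generator(), make_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
# Common data-parallel heuristic: scale the learning rate with the number of workers.
g_opt = tf.keras.optimizers.Adam(1e-4 * hvd.size())
d_opt = tf.keras.optimizers.Adam(1e-4 * hvd.size())

@tf.function
def train_step(real_imgs, energies, first_batch):
    noise = tf.random.normal((tf.shape(real_imgs)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_imgs = gen([noise, energies], training=True)
        real_logits = disc([real_imgs, energies], training=True)
        fake_logits = disc([fake_imgs, energies], training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    # Wrapping the tapes averages the gradients over all MPI ranks via allreduce.
    d_grads = hvd.DistributedGradientTape(d_tape).gradient(d_loss, disc.trainable_variables)
    g_grads = hvd.DistributedGradientTape(g_tape).gradient(g_loss, gen.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, disc.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, gen.trainable_variables))
    if first_batch:
        # Synchronize initial weights so every worker continues from the same state.
        hvd.broadcast_variables(gen.variables + disc.variables, root_rank=0)
    return d_loss, g_loss

Each rank would feed its own shard of the simulated detector data to train_step, and the job would be launched with one process per GPU via mpirun or horovodrun, mirroring how an MPI-based plugin is deployed across the nodes of an HPC system.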