Evaluating Mixed-Precision Arithmetic for 3D Generative Adversarial Networks to Simulate High Energy Physics Detectors
Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. The usage of Mixed Precision (MP) arithmetic with floating-point 32-bit (FP32) and 16-bit half-precision aims at improving memory and floating-point operations throughput, allowing faster t...
Main authors: Ríos, John Osorio; Armejach, Adrià; Khattak, Gulrukh; Petit, Eric; Vallecorsa, Sofia; Casas, Marc
Language: eng
Published: 2020
Online access: https://dx.doi.org/10.1109/ICMLA51294.2020.00017 http://cds.cern.ch/record/2759602
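The abstract describes training with BF16/FP32 mixed-precision arithmetic to improve memory and floating-point throughput. As a rough illustration only, the sketch below shows how such a mixed-precision policy can be enabled in TensorFlow/Keras; the toy Conv3D model, the 25×25×25 input grid, and the random training data are hypothetical stand-ins and not the paper's actual 3DGAN code.

```python
# Minimal sketch of BF16/FP32 mixed-precision training in TensorFlow/Keras.
# The model, input shape, and data below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in bfloat16 while keeping variables (weights) in float32.
mixed_precision.set_global_policy("mixed_bfloat16")

# Toy stand-in for a 3D convolutional block, not the paper's 3DGAN.
model = tf.keras.Sequential([
    layers.Input(shape=(25, 25, 25, 1)),
    layers.Conv3D(8, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling3D(),
    # Keep the output layer in float32 for numerically stable results.
    layers.Dense(1, activation="sigmoid", dtype="float32"),
])

model.compile(optimizer="adam", loss="binary_crossentropy")

# Random data purely to exercise the mixed-precision training loop.
x = tf.random.normal((32, 25, 25, 25, 1))
y = tf.cast(tf.random.uniform((32, 1), maxval=2, dtype=tf.int32), tf.float32)
model.fit(x, y, epochs=1, batch_size=8)
```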
Similar items
- Evaluating POWER Architecture for Distributed Training of Generative Adversarial Networks
  by: Hesam, Ahmad, et al.
  Published: (2019)
- Generative Adversarial Networks for fast simulation
  by: Carminati, Federico, et al.
  Published: (2020)
- Distributed Training of Generative Adversarial Networks for Fast Simulation
  by: Vallecorsa, Sofia, et al.
  Published: (2019)
- High Energy Physics Calorimeter Detector Simulation Using Generative Adversarial Networks With Domain Related Constraints
  by: Khattak, Gul Rukh, et al.
  Published: (2021)
- Fast Simulation of a High Granularity Calorimeter by Generative Adversarial Networks
  by: Khattak, Gul Rukh, et al.
  Published: (2021)