Evaluating Mixed-Precision Arithmetic for 3D Generative Adversarial Networks to Simulate High Energy Physics Detectors
Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. Using Mixed Precision (MP) arithmetic with 32-bit floating point (FP32) and 16-bit half precision aims to improve memory and floating-point operation throughput, allowing faster t...
Main authors:
Language: eng
Published: 2020
Online access: https://dx.doi.org/10.1109/ICMLA51294.2020.00017 http://cds.cern.ch/record/2759602
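To make the abstract's idea concrete, below is a minimal sketch of one mixed-precision training step, assuming PyTorch's automatic mixed precision (autocast) API. The toy model, data shapes, and hyperparameters are placeholders for illustration; this is not the paper's 3DGAN code.

```python
# Hypothetical illustration of mixed-precision (MP) training as described in
# the abstract, NOT the paper's 3DGAN implementation. The model, shapes, and
# hyperparameters are assumptions chosen for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# Master weights stay in FP32; only ops inside autocast run in 16-bit.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 64, device=device)
y = torch.randn(32, 1, device=device)

optimizer.zero_grad()
# BF16 keeps FP32's 8-bit exponent range, so plain backprop usually suffices;
# FP16 has a narrower range and typically needs loss scaling
# (torch.cuda.amp.GradScaler) to avoid gradient underflow.
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = F.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```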