
Evaluating Mixed-Precision Arithmetic for 3D Generative Adversarial Networks to Simulate High Energy Physics Detectors

Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. The usage of Mixed Precision (MP) arithmetic with floating-point 32-bit (FP32) and 16-bit half-precision aims at improving memory and floating-point operations throughput, allowing faster training of bigger models. This paper proposes a binary analysis tool enabling the emulation of lower precision numerical formats in Neural Network implementations without the need for hardware support. This tool is used to analyze BF16 usage in the training phase of a 3D Generative Adversarial Network (3DGAN) simulating High Energy Physics detectors. The binary tool allows us to confirm that BF16 can provide results with similar accuracy as the full-precision 3DGAN version and the costly reference numerical simulation using double precision arithmetic.
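The emulation idea described in the abstract — running BF16 arithmetic on hardware that only supports FP32 — can be approximated in software by truncating the low 16 mantissa bits of each FP32 value, since BF16 keeps FP32's 8-bit exponent and shortens the mantissa from 23 to 7 bits. The paper's binary analysis tool is not reproduced here; the following is only a minimal illustrative sketch assuming round-toward-zero truncation:

```python
import numpy as np

def to_bf16(x):
    """Emulate BF16 storage of FP32 values by zeroing the low 16 bits.

    BF16 shares FP32's sign bit and 8-bit exponent, so dropping the
    bottom 16 mantissa bits yields a value exactly representable in
    BF16. Real tools typically also offer round-to-nearest-even; this
    sketch uses simple truncation for clarity.
    """
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

# Values with short mantissas survive exactly; others lose precision.
print(float(to_bf16(np.float32(1.0))))         # exactly representable
print(float(to_bf16(np.float32(3.14159265))))  # truncated to 3.140625
```

Applying such a truncation to selected tensors after each operation lets one study the accuracy impact of BF16 training without BF16 hardware, which is the kind of experiment the paper performs on 3DGAN.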


Bibliographic Details

Main Authors: Ríos, John Osorio, Armejach, Adrià, Khattak, Gulrukh, Petit, Eric, Vallecorsa, Sofia, Casas, Marc
Language: English
Published: 2020
Online Access: https://dx.doi.org/10.1109/ICMLA51294.2020.00017
http://cds.cern.ch/record/2759602
Collection: CERN
Record ID: oai-inspirehep.net-1853931
Institution: European Organization for Nuclear Research