
Simulating HEP Workflows on Heterogeneous Architectures

Bibliographic Details
Main Authors: Leggett, Charles; Shapoval, Illya
Language: eng
Published: 2018
Subjects: Particle Physics - Experiment
Online Access: http://cds.cern.ch/record/2645070
author Leggett, Charles
Shapoval, Illya
collection CERN
description The next generation of supercomputing facilities, such as Oak Ridge's Summit and Lawrence Livermore's Sierra, show an increasing use of GPGPUs and other accelerators in order to achieve their high FLOP counts. This trend will only grow with exascale facilities. In general, High Energy Physics computing workflows have made little use of GPUs due to the relatively small fraction of kernels that run efficiently on GPUs, and the expense of rewriting code for rapidly evolving GPU hardware. However, the computing requirements for high-luminosity LHC are enormous, and it will become essential to be able to make use of supercomputing facilities that rely heavily on GPUs and other accelerator technologies. ATLAS has already developed an extension to AthenaMT, its multithreaded event processing framework, that enables the non-intrusive offloading of computations to external accelerator resources, and is developing strategies to schedule the offloading efficiently. Before investing heavily in writing many kernels, we need to better understand the performance metrics and throughput bounds of the workflows with various accelerator configurations. This can be done by simulating the workflows, using real metrics for task interdependencies and timing, as we vary fractions of offloaded tasks, latencies, data conversion speeds, memory bandwidths, and accelerator offloading parameters such as CPU/GPU ratios and speeds. We present the results of these studies, which will be instrumental in directing effort to make the ATLAS framework, kernels and workflows run efficiently on exascale facilities.
id cern-2645070
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2018
record_format invenio
report number ATL-SOFT-SLIDE-2018-975
oai id oai:cds.cern.ch:2645070
date 2018-10-26
title Simulating HEP Workflows on Heterogeneous Architectures
topic Particle Physics - Experiment
url http://cds.cern.ch/record/2645070
work_keys_str_mv AT leggettcharles simulatinghepworkflowsonheterogeneousarchitectures
AT shapovalillya simulatinghepworkflowsonheterogeneousarchitectures
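
Note: the record above does not include the simulation tooling the abstract describes. The following is a minimal, purely illustrative Python sketch of the kind of parameter scan outlined in the description field: vary the fraction of offloaded kernels, the offload latency, the data-conversion cost, and the CPU/accelerator mix, and estimate the resulting event throughput. All task timings, latencies, speedups, and resource counts below are invented placeholders, not ATLAS or AthenaMT measurements.

import random

def simulate_throughput(n_tasks=50,            # kernels per event (placeholder)
                        offload_fraction=0.3,   # fraction of kernels sent to the accelerator
                        gpu_speedup=10.0,       # assumed kernel speedup on the accelerator
                        offload_latency=0.002,  # assumed per-offload latency, seconds
                        conversion_time=0.001,  # assumed data-conversion cost, seconds
                        n_cpu=16, n_gpu=1,      # assumed CPU core / accelerator counts
                        n_events=200, seed=1):
    """Rough events-per-second estimate for one point in the parameter scan."""
    rng = random.Random(seed)
    cpu_busy = 0.0  # total CPU-seconds consumed
    gpu_busy = 0.0  # total accelerator-seconds consumed
    for _ in range(n_events):
        for _ in range(n_tasks):
            t_cpu = rng.expovariate(1.0 / 0.010)  # placeholder mean kernel time: 10 ms
            if rng.random() < offload_fraction:
                # Offloaded kernel: the CPU pays latency and data conversion,
                # the accelerator runs the kernel faster.
                cpu_busy += offload_latency + conversion_time
                gpu_busy += t_cpu / gpu_speedup
            else:
                cpu_busy += t_cpu
    # Throughput is bounded by whichever resource pool saturates first.
    wall_time = max(cpu_busy / n_cpu, gpu_busy / n_gpu)
    return n_events / wall_time

if __name__ == "__main__":
    for frac in (0.0, 0.2, 0.5, 0.8):
        rate = simulate_throughput(offload_fraction=frac)
        print(f"offload fraction {frac:.1f}: {rate:6.1f} events/s")

This toy model deliberately ignores task interdependencies and scheduling; the studies described in the abstract use real dependency and timing metrics taken from the ATLAS workflows themselves.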