
Geant4 Computing Performance Benchmarking and Monitoring

Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
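
The abstract describes measuring event throughput and memory gain as a function of the number of worker threads. Below is a minimal illustrative sketch (not taken from the paper) of how such scalability figures could be tabulated; the thread counts, timings, memory values, and the exact metric definitions are assumptions for illustration only.

    # Illustrative only: the sample numbers are placeholders, not results from
    # the paper. Assumed metric definitions:
    #   throughput  = events / wall-clock time
    #   speedup     = wall time with 1 thread / wall time with n threads
    #   memory gain = memory n separate processes would need (n * RSS_1)
    #                 divided by the RSS of one n-thread process
    measurements = [
        # (threads, wall-clock seconds, peak RSS in MB) -- placeholder values
        (1, 1000.0,  900.0),
        (2,  510.0, 1000.0),
        (4,  265.0, 1200.0),
        (8,  140.0, 1600.0),
    ]
    EVENTS_PER_RUN = 1000  # assumed fixed event sample per run

    wall_1, rss_1 = measurements[0][1], measurements[0][2]
    for threads, wall, rss in measurements:
        throughput = EVENTS_PER_RUN / wall
        speedup = wall_1 / wall
        memory_gain = threads * rss_1 / rss
        print(f"{threads:2d} threads: {throughput:6.2f} ev/s  "
              f"speedup {speedup:4.2f}  memory gain {memory_gain:4.2f}")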


Bibliographic Details

Main Authors: Dotti, Andrea, Elvira, V Daniel, Folger, Gunter, Genser, Krzysztof, Jun, Soon Yung, Kowalkowski, James B, Paterno, Marc

Language: eng

Published: 2015
Subjects: Computing and Computers

Online Access: https://dx.doi.org/10.1088/1742-6596/664/6/062021
http://cds.cern.ch/record/2134601
_version_ 1780949913517948928
author Dotti, Andrea
Elvira, V Daniel
Folger, Gunter
Genser, Krzysztof
Jun, Soon Yung
Kowalkowski, James B
Paterno, Marc
author_facet Dotti, Andrea
Elvira, V Daniel
Folger, Gunter
Genser, Krzysztof
Jun, Soon Yung
Kowalkowski, James B
Paterno, Marc
author_sort Dotti, Andrea
collection CERN
description Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
id oai-inspirehep.net-1413942
institution European Organization for Nuclear Research
language eng
publishDate 2015
record_format invenio
spelling oai-inspirehep.net-1413942 2022-08-10T13:00:59Z doi:10.1088/1742-6596/664/6/062021 http://cds.cern.ch/record/2134601 eng Dotti, Andrea; Elvira, V Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B; Paterno, Marc. Geant4 Computing Performance Benchmarking and Monitoring. Computing and Computers. Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples. FERMILAB-CONF-15-599-CD oai:inspirehep.net:1413942 2015
spellingShingle Computing and Computers
Dotti, Andrea
Elvira, V Daniel
Folger, Gunter
Genser, Krzysztof
Jun, Soon Yung
Kowalkowski, James B
Paterno, Marc
Geant4 Computing Performance Benchmarking and Monitoring
title Geant4 Computing Performance Benchmarking and Monitoring
title_full Geant4 Computing Performance Benchmarking and Monitoring
title_fullStr Geant4 Computing Performance Benchmarking and Monitoring
title_full_unstemmed Geant4 Computing Performance Benchmarking and Monitoring
title_short Geant4 Computing Performance Benchmarking and Monitoring
title_sort geant4 computing performance benchmarking and monitoring
topic Computing and Computers
url https://dx.doi.org/10.1088/1742-6596/664/6/062021
http://cds.cern.ch/record/2134601
work_keys_str_mv AT dottiandrea geant4computingperformancebenchmarkingandmonitoring
AT elviravdaniel geant4computingperformancebenchmarkingandmonitoring
AT folgergunter geant4computingperformancebenchmarkingandmonitoring
AT genserkrzysztof geant4computingperformancebenchmarkingandmonitoring
AT junsoonyung geant4computingperformancebenchmarkingandmonitoring
AT kowalkowskijamesb geant4computingperformancebenchmarkingandmonitoring
AT paternomarc geant4computingperformancebenchmarkingandmonitoring