Running ATLAS Simulations on HPCs

Bibliographic Details

Main Author: De, Kaushik
Language: eng
Published: 2018
Subjects:
Online Access: http://cds.cern.ch/record/2628408
_version_ 1780959166890770432
author De, Kaushik
author_facet De, Kaushik
author_sort De, Kaushik
collection CERN
description Experiments at the Large Hadron Collider require data-intensive processing and traditionally have not used HPCs. Until a few years ago, the ATLAS experiment at the LHC was using less than 10 million hours of walltime on HPCs annually, while over an exabyte of data was processed annually on the grid. A large increase in data volume and data complexity at the LHC in 2016 created a shortage of computing cycles, and HPC systems stepped in to help the LHC achieve its physics goals. Currently, ATLAS is on schedule to use about half a billion hours of walltime on HPCs during the past 12 months. This is a huge increase in usage over a few years, requiring numerous innovations and improvements. This talk will describe the use of HPCs worldwide by ATLAS, primarily for simulations, and will focus specifically on how the HPCs are integrated with the workflow management and data management systems, and on the lessons learned during this integration.
id cern-2628408
institution European Organization for Nuclear Research
language eng
publishDate 2018
record_format invenio
spelling cern-2628408 | 2019-09-30T06:29:59Z | http://cds.cern.ch/record/2628408 | eng | De, Kaushik | Running ATLAS Simulations on HPCs | Particle Physics - Experiment | Experiments at the Large Hadron Collider require data-intensive processing and traditionally have not used HPCs. Until a few years ago, the ATLAS experiment at the LHC was using less than 10 million hours of walltime on HPCs annually, while over an exabyte of data was processed annually on the grid. A large increase in data volume and data complexity at the LHC in 2016 created a shortage of computing cycles, and HPC systems stepped in to help the LHC achieve its physics goals. Currently, ATLAS is on schedule to use about half a billion hours of walltime on HPCs during the past 12 months. This is a huge increase in usage over a few years, requiring numerous innovations and improvements. This talk will describe the use of HPCs worldwide by ATLAS, primarily for simulations, and will focus specifically on how the HPCs are integrated with the workflow management and data management systems, and on the lessons learned during this integration. | ATL-SOFT-SLIDE-2018-456 | oai:cds.cern.ch:2628408 | 2018-07-04
spellingShingle Particle Physics - Experiment
De, Kaushik
Running ATLAS Simulations on HPCs
title Running ATLAS Simulations on HPCs
title_full Running ATLAS Simulations on HPCs
title_fullStr Running ATLAS Simulations on HPCs
title_full_unstemmed Running ATLAS Simulations on HPCs
title_short Running ATLAS Simulations on HPCs
title_sort running atlas simulations on hpcs
topic Particle Physics - Experiment
url http://cds.cern.ch/record/2628408
work_keys_str_mv AT dekaushik runningatlassimulationsonhpcs