
Running ATLAS Simulations on HPCs


Bibliographic Details

Main Author: De, Kaushik
Language: English
Published: 2018
Subjects:
Online Access: http://cds.cern.ch/record/2628408
Description

Summary: Experiments at the Large Hadron Collider require data-intensive processing and traditionally do not use HPCs. Until a few years ago, the ATLAS experiment at the LHC was using less than 10 million hours of walltime on HPCs annually, while over an exabyte of data was processed annually on the grid. A large increase in data volume and data complexity at the LHC in 2016 created a shortage of computing cycles, and HPC systems stepped in to help the LHC achieve its physics goals. ATLAS is currently on track to use about half a billion hours of HPC walltime over the past 12 months. This is a huge increase in usage over just a few years, one that required numerous innovations and improvements. This talk will describe the use of HPCs worldwide by ATLAS, primarily for simulations, with a specific focus on how the HPCs are integrated with the workflow management and data management systems, and on the lessons learned during this integration.