Providing the computing and data to the physicists

Bibliographic Details
Main author: Svatos, Michal
Language: eng
Published: 2020
Subjects: Particle Physics - Experiment
Online access: http://cds.cern.ch/record/2723830
author Svatos, Michal
collection CERN
description The ATLAS experiment at CERN uses more than 150 sites in the WLCG to process and analyze data recorded by the LHC. The grid workflow system PanDA routinely utilizes more than 400 thousand CPU cores at those sites. The data management system Rucio manages about half an exabyte of detector and simulation data distributed among these sites. With the ever-improving performance of the LHC, more data is expected, and ATLAS computing needs to evolve and adapt accordingly. Disk space will become scarcer, which should be alleviated by more active use of tapes and caches and by new, smaller data formats. Grid jobs can run not only on WLCG sites but also on opportunistic resources, e.g. clouds and HPCs. A new Grafana-based monitoring system facilitates the operation of ATLAS computing. This presentation will review and explain the improvements put in place for the upcoming Run 3 and will provide an outlook on the many improvements needed for the HL-LHC.
id cern-2723830
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2020
record_format invenio
report_number ATL-SOFT-SLIDE-2020-234
oai oai:cds.cern.ch:2723830
date 2020-07-15
last_modified 2021-04-29T12:36:16Z
title Providing the computing and data to the physicists
topic Particle Physics - Experiment
url http://cds.cern.ch/record/2723830
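The fields above are a flattened key–value export of the Invenio record, one field per line with the key separated from its value by the first space. As a minimal sketch (assuming exactly that layout; the `parse_record` helper and the sample lines are illustrative, not part of any CDS API), such a dump could be turned back into a dictionary like so:

```python
def parse_record(dump: str) -> dict:
    """Parse a flattened record dump (one 'key value' field per line)
    into a dict; repeated keys collect their values into a list."""
    record = {}
    for line in dump.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition(" ")
        if key in record:
            existing = record[key]
            if isinstance(existing, list):
                existing.append(value)
            else:
                record[key] = [existing, value]
        else:
            record[key] = value
    return record

# Sample lines taken from the record above.
sample = """\
id cern-2723830
author Svatos, Michal
language eng
publishDate 2020
url http://cds.cern.ch/record/2723830"""

rec = parse_record(sample)
print(rec["author"])  # Svatos, Michal
```

Splitting on only the first space keeps multi-word values (author names, titles, URLs with no spaces) intact; collecting repeated keys into a list accommodates fields such as multiple authors or subjects.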