ATLAS Distributed Data Analysis: challenges and performance
| Main author: | Fassi, Farida |
|---|---|
| Language: | eng |
| Published: | 2015 |
| Subjects: | Particle Physics - Experiment |
| Online access: | http://cds.cern.ch/record/2004869 |
| _version_ | 1780946132377010176 |
|---|---|
| author | Fassi, Farida |
| author_facet | Fassi, Farida |
| author_sort | Fassi, Farida |
| collection | CERN |
| description | In the LHC operations era, the key goal is to analyse the results of collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN records and simulates several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid computing: the large data volumes from the detectors and simulations require a large number of CPUs and a large amount of storage space for data processing. To cope with this challenge, a global network known as the Worldwide LHC Computing Grid (WLCG) was built; it is the most sophisticated data-taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data, ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. The DA system achieved high reliability during the first LHC run and the subsequent shutdown period, thanks to the continuous automatic validation of grid sites against a set of standard tests and to a dedicated team of expert shifters who provide user support and communicate user problems to the sites efficiently. In this report we review the state of the DA system, with emphasis on the DA infrastructure changes made to cope with the challenges of the second LHC run (starting in 2015) and to improve analysis workflows, including a new analysis model. Special attention is devoted to the ATLAS Distributed Analysis support facility (DAST). |
| id | cern-2004869 |
| institution | European Organization for Nuclear Research |
| language | eng |
| publishDate | 2015 |
| record_format | invenio |
| spelling | cern-2004869; 2019-09-30T06:29:59Z; http://cds.cern.ch/record/2004869; eng; Fassi, Farida; ATLAS Distributed Data Analysis: challenges and performance; Particle Physics - Experiment; ATL-SOFT-PROC-2015-001; oai:cds.cern.ch:2004869; 2015-03-27 |
| spellingShingle | Particle Physics - Experiment; Fassi, Farida; ATLAS Distributed Data Analysis: challenges and performance |
| title | ATLAS Distributed Data Analysis: challenges and performance |
| title_full | ATLAS Distributed Data Analysis: challenges and performance |
| title_fullStr | ATLAS Distributed Data Analysis: challenges and performance |
| title_full_unstemmed | ATLAS Distributed Data Analysis: challenges and performance |
| title_short | ATLAS Distributed Data Analysis: challenges and performance |
| title_sort | atlas distributed data analysis: challenges and performance |
| topic | Particle Physics - Experiment |
| url | http://cds.cern.ch/record/2004869 |
| work_keys_str_mv | AT fassifarida atlasdistributeddataanalysischallengesandperformance |
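
Since the record_format field above is invenio and the record is publicly accessible at the URL in the url field, its machine-readable metadata can in principle be fetched programmatically. Below is a minimal sketch, assuming that the CDS legacy Invenio interface exposes a MARCXML export at `<record URL>/export/xm`; the endpoint suffix is an assumption, not something stated in this record.

```python
import urllib.request

# Public CDS record URL taken from the "url" field above.
RECORD_URL = "http://cds.cern.ch/record/2004869"

# Assumption: legacy Invenio servers such as CDS serve a MARCXML export
# at <record>/export/xm; adjust the suffix if the endpoint differs.
with urllib.request.urlopen(RECORD_URL + "/export/xm") as resp:
    marcxml = resp.read().decode("utf-8")

# Quick sanity check: print the opening of the MARCXML document.
print(marcxml[:400])
```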