
Distributed processing and analysis of ATLAS experimental data

The ATLAS experiment has been taking data steadily since Autumn 2009 and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the Worldwide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project.

Bibliographic Details
Main author: Barberis, D
Language: eng
Published: 2011
Subjects: Detectors and Experimental Techniques
Online access: http://cds.cern.ch/record/1397884
_version_ 1780923542893756416
author Barberis, D
author_facet Barberis, D
author_sort Barberis, D
collection CERN
description The ATLAS experiment has been taking data steadily since Autumn 2009 and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the Worldwide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data-processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data worldwide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first two years of operation.
id cern-1397884
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2011
record_format invenio
spelling cern-1397884 2019-09-30T06:29:59Z http://cds.cern.ch/record/1397884 eng Barberis, D Distributed processing and analysis of ATLAS experimental data Detectors and Experimental Techniques The ATLAS experiment has been taking data steadily since Autumn 2009 and has so far collected over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the Worldwide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data-processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data worldwide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first two years of operation. ATL-SOFT-PROC-2011-045 oai:cds.cern.ch:1397884 2011-11-11
spellingShingle Detectors and Experimental Techniques
Barberis, D
Distributed processing and analysis of ATLAS experimental data
title Distributed processing and analysis of ATLAS experimental data
title_full Distributed processing and analysis of ATLAS experimental data
title_fullStr Distributed processing and analysis of ATLAS experimental data
title_full_unstemmed Distributed processing and analysis of ATLAS experimental data
title_short Distributed processing and analysis of ATLAS experimental data
title_sort distributed processing and analysis of atlas experimental data
topic Detectors and Experimental Techniques
url http://cds.cern.ch/record/1397884
work_keys_str_mv AT barberisd distributedprocessingandanalysisofatlasexperimentaldata
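The raw fields above follow a simple flat `key value` layout, one field per line. As an illustration only, the field names are taken from this record, but the parsing helper below is a hypothetical sketch, not part of any CDS, Invenio, or VuFind API, such a dump can be loaded into a dictionary like this:

```python
# Hypothetical sketch: parse flat "key value" record lines into a dict.
# Field names come from the record above; the helper itself is illustrative.
record_lines = [
    "id cern-1397884",
    "language eng",
    "publishDate 2011",
    "record_format invenio",
    "url http://cds.cern.ch/record/1397884",
]

def parse_record(lines):
    """Split each line at the first space: left part is the field name,
    the remainder is the field value."""
    record = {}
    for line in lines:
        key, _, value = line.partition(" ")
        record[key] = value
    return record

record = parse_record(record_lines)
print(record["id"])  # cern-1397884
```

Note that multi-valued fields (e.g. `author_facet`) would overwrite each other in this minimal sketch; a real loader would collect repeated keys into lists.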