The ATLAS Distributed Computing: the challenges of the future

Bibliographic Details
Main Author: Sakamoto, H
Language: eng
Published: 2013
Subjects: Detectors and Experimental Techniques
Online Access: http://cds.cern.ch/record/1529951
_version_ 1780929559611310080
author Sakamoto, H
author_facet Sakamoto, H
author_sort Sakamoto, H
collection CERN
description The ATLAS experiment has collected more than 25 fb^-1 of data since the LHC started its operation in 2010. Tens of petabytes of collision events and Monte Carlo simulations are stored across more than 150 computing centres around the world. Data processing is performed on grid sites providing more than 100,000 computing cores, orchestrated by ATLAS's in-house developed job and data management services. The discovery of the Higgs-like boson in 2012 would not have been possible without the excellent performance of ATLAS Distributed Computing. Future ATLAS operation with the increased LHC beam energy and luminosity foreseen for 2014 imposes a significant increase in the computing demands that ATLAS Distributed Computing needs to satisfy. Therefore, development of new data-processing, storage, and data-distribution systems has started in order to use the computing resources efficiently, exploiting current and future technologies of distributed computing.
id cern-1529951
institution European Organization for Nuclear Research
language eng
publishDate 2013
record_format invenio
spelling cern-1529951 2019-09-30T06:29:59Z http://cds.cern.ch/record/1529951 eng Sakamoto, H The ATLAS Distributed Computing: the challenges of the future Detectors and Experimental Techniques The ATLAS experiment has collected more than 25 fb^-1 of data since the LHC started its operation in 2010. Tens of petabytes of collision events and Monte Carlo simulations are stored across more than 150 computing centres around the world. Data processing is performed on grid sites providing more than 100,000 computing cores, orchestrated by ATLAS's in-house developed job and data management services. The discovery of the Higgs-like boson in 2012 would not have been possible without the excellent performance of ATLAS Distributed Computing. Future ATLAS operation with the increased LHC beam energy and luminosity foreseen for 2014 imposes a significant increase in the computing demands that ATLAS Distributed Computing needs to satisfy. Therefore, development of new data-processing, storage, and data-distribution systems has started in order to use the computing resources efficiently, exploiting current and future technologies of distributed computing. ATL-SOFT-SLIDE-2013-094 oai:cds.cern.ch:1529951 2013-03-20
spellingShingle Detectors and Experimental Techniques
Sakamoto, H
The ATLAS Distributed Computing: the challenges of the future
title The ATLAS Distributed Computing: the challenges of the future
title_full The ATLAS Distributed Computing: the challenges of the future
title_fullStr The ATLAS Distributed Computing: the challenges of the future
title_full_unstemmed The ATLAS Distributed Computing: the challenges of the future
title_short The ATLAS Distributed Computing: the challenges of the future
title_sort atlas distributed computing: the challenges of the future
topic Detectors and Experimental Techniques
url http://cds.cern.ch/record/1529951
work_keys_str_mv AT sakamotoh theatlasdistributedcomputingthechallengesofthefuture
AT sakamotoh atlasdistributedcomputingthechallengesofthefuture
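The record lists both a direct URL (http://cds.cern.ch/record/1529951) and an OAI identifier (oai:cds.cern.ch:1529951). As a minimal sketch, assuming the CDS Invenio server exposes a standard OAI-PMH endpoint at /oai2d and supports the mandatory oai_dc metadata format, the metadata could be retrieved programmatically as follows; the endpoint path and the helper name get_record_dc are assumptions for illustration, not taken from the record itself.

# Minimal sketch: fetch this record's metadata via OAI-PMH (assumed endpoint).
# The /oai2d path is an assumption about the CDS server; only the record URL
# and the OAI identifier come from the record above.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "http://cds.cern.ch/oai2d"   # assumed Invenio OAI-PMH endpoint
RECORD_OAI_ID = "oai:cds.cern.ch:1529951"   # OAI identifier from the record

def get_record_dc(identifier: str) -> dict:
    """Fetch a record as Dublin Core (oai_dc) and return its fields as lists."""
    params = {
        "verb": "GetRecord",                 # standard OAI-PMH verb
        "identifier": identifier,
        "metadataPrefix": "oai_dc",          # format every OAI-PMH server must support
    }
    url = OAI_ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.fromstring(resp.read())
    dc_ns = "{http://purl.org/dc/elements/1.1/}"
    fields: dict = {}
    for elem in tree.iter():
        if elem.tag.startswith(dc_ns):       # keep only Dublin Core elements
            fields.setdefault(elem.tag[len(dc_ns):], []).append((elem.text or "").strip())
    return fields

if __name__ == "__main__":
    record = get_record_dc(RECORD_OAI_ID)
    print(record.get("title"))
    print(record.get("creator"))

Whether richer formats (e.g. MARCXML) are available beyond oai_dc would need to be confirmed against the server's ListMetadataFormats response.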