
ATLAS Distributed Computing in LHC Run2

Bibliographic Details
Main author: Campana, Simone
Language: eng
Published: 2015
Subjects: Particle Physics - Experiment
Online access: https://dx.doi.org/10.1088/1742-6596/664/3/032004
http://cds.cern.ch/record/2016337
_version_ 1780946696319008768
author Campana, Simone
author_facet Campana, Simone
author_sort Campana, Simone
collection CERN
description The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a more flexible computing model. The flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience of the new system and its evolution is presented.
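The lifetime-based data management strategy mentioned above can be illustrated with a minimal sketch in Python. This is a hypothetical illustration under assumptions, not the actual Rucio implementation or API: the Dataset class, the is_expired helper, the eviction_candidates function, and the example dataset names are all invented for this sketch. The idea is simply that every dataset carries a lifetime assigned at creation, and replicas of expired, unpinned datasets become candidates for cleanup.

# Hypothetical sketch of lifetime-based dataset lifecycle management;
# the names below are illustrative and are not part of Rucio's real API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Dataset:
    name: str
    created_at: datetime
    lifetime: Optional[timedelta]   # None means "keep indefinitely"
    pinned: bool = False            # e.g. still referenced by an active workflow

def is_expired(ds: Dataset, now: datetime) -> bool:
    """A dataset expires once its age exceeds the lifetime set at creation."""
    return ds.lifetime is not None and now - ds.created_at > ds.lifetime

def eviction_candidates(datasets: List[Dataset], now: datetime) -> List[Dataset]:
    """Expired, unpinned datasets whose replicas could be cleaned from disk."""
    return [ds for ds in datasets if is_expired(ds, now) and not ds.pinned]

if __name__ == "__main__":
    now = datetime(2015, 5, 14)
    catalog = [
        Dataset("mc15.simul.example", datetime(2014, 1, 1), timedelta(days=365)),
        Dataset("data15.raw.example", datetime(2015, 4, 1), None),
        Dataset("user.analysis.example", datetime(2014, 6, 1), timedelta(days=180), pinned=True),
    ]
    for ds in eviction_candidates(catalog, now):
        print("candidate for cleanup:", ds.name)

In this toy example only the unpinned Monte-Carlo dataset older than its one-year lifetime is reported; raw data with no lifetime and pinned analysis output are left alone, reflecting the intent of managing the data lifecycle through assigned lifetimes rather than ad hoc deletion campaigns.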
id cern-2016337
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2015
record_format invenio
spelling cern-2016337 2022-08-10T12:54:39Z doi:10.1088/1742-6596/664/3/032004 http://cds.cern.ch/record/2016337 eng Campana, Simone ATLAS Distributed Computing in LHC Run2 Particle Physics - Experiment The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a more flexible computing model. The flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience of the new system and its evolution is presented. ATL-SOFT-PROC-2015-030 oai:cds.cern.ch:2016337 2015-05-14
spellingShingle Particle Physics - Experiment
Campana, Simone
ATLAS Distributed Computing in LHC Run2
title ATLAS Distributed Computing in LHC Run2
title_full ATLAS Distributed Computing in LHC Run2
title_fullStr ATLAS Distributed Computing in LHC Run2
title_full_unstemmed ATLAS Distributed Computing in LHC Run2
title_short ATLAS Distributed Computing in LHC Run2
title_sort atlas distributed computing in lhc run2
topic Particle Physics - Experiment
url https://dx.doi.org/10.1088/1742-6596/664/3/032004
http://cds.cern.ch/record/2016337
work_keys_str_mv AT campanasimone atlasdistributedcomputinginlhcrun2