
Recent and planned changes to the LHCb computing model

Bibliographic Details
Main authors: Cattaneo, Marco, Charpentier, P, Clarke, P, Roiser, S
Published: 2014
Subjects:
Online access: https://dx.doi.org/10.1088/1742-6596/513/3/032017
http://cds.cern.ch/record/2055704
author Cattaneo, Marco
Charpentier, P
Clarke, P
Roiser, S
collection CERN
description The LHCb experiment [1] took data between December 2009 and February 2013. The data-taking conditions and trigger rate were adjusted several times during this period to make optimal use of the luminosity delivered by the LHC and to extend the physics potential of the experiment. By 2012, LHCb was taking data at twice the instantaneous luminosity and 2.5 times the high-level trigger rate originally foreseen. This represents a considerable increase in the amount of data to be handled compared to the original Computing Model from 2005, both in terms of compute power and of storage. In this paper we describe the changes made to the LHCb computing model during the last two years of data taking in order to process and analyse the increased data rates within limited computing resources. In particular, a rather novel change was introduced at the end of 2011, when LHCb started to use, for reprocessing, compute power that was not co-located with the RAW data, namely Tier2 sites and private resources. The flexibility of the LHCbDirac Grid interware allowed easy inclusion of these additional resources, which in 2012 provided 45% of the compute power for the end-of-year reprocessing. Several changes were also implemented in the Data Management model to limit the need for accessing data from tape, as well as in the data placement policy to cope with a large imbalance in storage resources at Tier1 sites. We also discuss changes being implemented during the LHC Long Shutdown 1 (LS1) to prepare for a further doubling of the data rate when the LHC restarts at a higher energy in 2015.
id cern-2055704
institution European Organization for Nuclear Research
publishDate 2014
record_format invenio
title Recent and planned changes to the LHCb computing model
topic Computing and Computers
url https://dx.doi.org/10.1088/1742-6596/513/3/032017
http://cds.cern.ch/record/2055704