
LHCb computing in Run II and its evolution towards Run III


Bibliographic Details
Main author: Falabella, Antonio
Language: eng
Published: SISSA 2016
Subjects: Computing and Computers
Online access: https://dx.doi.org/10.22323/1.282.0191
http://cds.cern.ch/record/2287319
author Falabella, Antonio
collection CERN
description This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. Furthermore, a brief introduction is given to LHCbDIRAC, the tool that interfaces to the experiment's distributed computing resources for its data processing and data management operations. Run 2, which started in 2015, has already seen several changes in the experiment's data processing workflows. Most notable is the ability to align and calibrate the detector between two stages of the data processing in the high-level trigger farm, eliminating the need for a second offline processing pass over the data. In addition, a fraction of the data is immediately reconstructed to its final physics format in the high-level trigger, and only this format is exported from the experiment site for physics analysis. This concept has been tested successfully and will continue to be used for the rest of Run 2. Furthermore, the distributed data processing has been improved with new concepts and technologies, as well as adaptations to the computing model. In Run 3 the experiment will see a further increase in instantaneous luminosity and pileup, leading to even higher data rates to be exported. The signal yield will increase further, which will impact the experiment's data processing model and the way physicists analyse data on distributed computing facilities. Also connected to the increased signal yield is the need to produce more Monte Carlo samples. The increase in CPU work cannot be absorbed by an increase in hardware resources. The changes needed in the data processing applications are discussed in the areas of multi-processor-aware applications, changes in the scheduling framework of the physics algorithms, and changes in the experiment's event data model to facilitate SIMD instructions.
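The event-model change mentioned at the end of the description can be illustrated with a minimal sketch. This is not the actual LHCb event model; `HitAoS`, `HitsSoA`, and `scale_x` are hypothetical names chosen to show the general idea of moving from an array-of-structs (AoS) layout to a struct-of-arrays (SoA) layout, so that coordinate loops access memory with unit stride and the compiler can apply SIMD instructions.

```cpp
#include <cstddef>
#include <vector>

// AoS layout: one struct per hit. Reading all x values strides
// through memory, which hinders SIMD vectorisation.
struct HitAoS {
    float x, y, z;
};

// SoA layout: each coordinate stored contiguously, so a loop over
// one coordinate makes unit-stride loads that map onto SIMD registers.
struct HitsSoA {
    std::vector<float> x, y, z;
    std::size_t size() const { return x.size(); }
};

// Scale all x coordinates; the contiguous access pattern lets the
// compiler auto-vectorise this loop.
void scale_x(HitsSoA& hits, float factor) {
    for (std::size_t i = 0; i < hits.size(); ++i)
        hits.x[i] *= factor;
}
```

The same loop over a `std::vector<HitAoS>` would load interleaved `x, y, z` triples, wasting vector-register lanes; the SoA form is the standard remedy.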
id oai-inspirehep.net-1596469
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2016
publisher SISSA
record_format invenio
title LHCb computing in Run II and its evolution towards Run III
topic Computing and Computers
url https://dx.doi.org/10.22323/1.282.0191
http://cds.cern.ch/record/2287319