LHCb computing in Run II and its evolution towards Run III
| Main Author: | |
|---|---|
| Language: | eng |
| Published: | SISSA, 2016 |
| Subjects: | |
| Online Access: | https://dx.doi.org/10.22323/1.282.0191 http://cds.cern.ch/record/2287319 |
| Summary: | This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. A brief introduction is also given to LHCbDIRAC, the tool used to interface with the experiment's distributed computing resources for its data processing and data management operations. Run 2, which started in 2015, has already seen several changes in the experiment's data processing workflows. Most notable is the ability to align and calibrate the detector between two stages of the data processing in the high-level trigger farm, eliminating the need for a second offline processing pass. In addition, a fraction of the data is immediately reconstructed to its final physics format in the high-level trigger, and only this format is exported from the experiment site for physics analysis. This concept has been tested successfully and will continue to be used for the rest of Run 2. Furthermore, the distributed data processing has been improved with new concepts and technologies, as well as adaptations to the computing model. In Run 3 the experiment will see a further increase in instantaneous luminosity and pileup, leading to even higher data rates to be exported. The signal yield will increase further, which will affect the experiment's data processing model and the ways physicists analyse data on distributed computing facilities. Also connected to the increased signal yield is the need to produce more Monte Carlo samples; the increase in CPU work cannot be absorbed by an increase in hardware resources. The changes needed in the data processing applications will be discussed in three areas: multi-processor-aware applications, changes in the scheduling framework of the physics algorithms, and changes in the experiment's event data model to facilitate SIMD instructions (an illustrative sketch of such a layout follows this record). |
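The last point in the abstract, reorganising the event data model to facilitate SIMD instructions, typically amounts to moving from an array-of-structures (AoS) to a structure-of-arrays (SoA) memory layout. The following is a minimal illustrative sketch of that idea in C++; the `Hit` and `HitsSoA` types are hypothetical and not taken from the LHCb code base.

```cpp
#include <cstddef>
#include <vector>

// AoS: the fields of one hit are adjacent in memory, but the same field of
// consecutive hits is not, which hinders vector loads across hits.
struct Hit {
    float x, y, z;
};

// SoA: each coordinate is stored contiguously across all hits, so a loop
// over hits touches contiguous memory and can be auto-vectorised.
struct HitsSoA {
    std::vector<float> x, y, z;
};

// Example kernel: shift all hits along z. With the SoA layout the loop body
// reads and writes contiguous floats, so a vectorising compiler can emit
// SIMD instructions that process several hits per iteration.
void shift_z(HitsSoA& hits, float dz) {
    for (std::size_t i = 0; i < hits.z.size(); ++i)
        hits.z[i] += dz;
}
```

This is a sketch of the general technique only; the actual event-model changes discussed in the paper are specific to the LHCb software framework.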