Scalable monitoring data processing for the LHCb software trigger

The LHCb High Level Trigger (HLT) is split into two stages. HLT1 is synchronous with collisions delivered by the LHC and writes its output to a local disk buffer, which is asynchronously processed by HLT2. Efficient monitoring of the data being processed by the application is crucial to promptly diagnose detector or software problems. HLT2 consists of approximately 50000 processes, each of which produces 4000 histograms. This results in 200 million histograms that need to be aggregated for each of up to a hundred data-taking intervals being processed simultaneously. This paper presents the multi-level hierarchical architecture of the monitoring infrastructure put in place to achieve this. Network bandwidth is minimised by sending histogram increments and exchanging metadata only when necessary, using a custom lightweight protocol based on Boost.Serialization. The transport layer is implemented with ZeroMQ, which supports IPC and TCP communication, queue handling, asynchronous request/response and multipart messages. The persistent storage to ROOT is parallelised in order to cope with data arriving from up to a hundred data-taking intervals being processed simultaneously by HLT2. The performance and scalability of the current system are presented. We demonstrate the feasibility of this approach for the HLT1 use case, where real-time feedback and reliability of the infrastructure are crucial. In addition, a prototype of a high-level transport layer based on the stream-processing platform Apache Kafka is shown, which has several advantages over the lower-level ZeroMQ solution.
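
As a concrete illustration of the increment-based protocol described in the abstract, here is a minimal C++ sketch (not the LHCb implementation): a producer serialises a histogram increment, i.e. only the bins that changed since the last publication, with Boost.Serialization and ships it over a ZeroMQ PUSH socket. The HistIncrement type, the histogram identifier and the endpoint are illustrative assumptions.

    // Sender sketch: serialise one histogram increment and push it out.
    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/serialization/string.hpp>
    #include <boost/serialization/utility.hpp>
    #include <boost/serialization/vector.hpp>
    #include <sstream>
    #include <string>
    #include <utility>
    #include <vector>
    #include <zmq.h>

    // Hypothetical message type: a histogram identifier plus (bin, delta)
    // pairs for the bins touched since the last publication.
    struct HistIncrement {
        std::string id;
        std::vector<std::pair<int, double>> bins;

        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & id & bins;
        }
    };

    int main() {
        void* ctx = zmq_ctx_new();
        void* push = zmq_socket(ctx, ZMQ_PUSH);
        zmq_connect(push, "tcp://aggregator:31337");   // assumed endpoint

        // Only the changes since the last send, not the full histogram.
        const HistIncrement inc{"hlt2/track_chi2", {{12, 3.0}, {13, 1.0}}};

        std::ostringstream os;
        boost::archive::binary_oarchive oa(os);
        oa << inc;                                     // compact binary payload
        const std::string buf = os.str();

        zmq_send(push, buf.data(), buf.size(), 0);     // one increment per message

        zmq_close(push);
        zmq_ctx_term(ctx);
    }

Building needs Boost.Serialization and libzmq, e.g. g++ -std=c++17 sender.cpp -lboost_serialization -lzmq. Sending deltas rather than full histogram contents is what keeps the bandwidth manageable with roughly 200 million histograms in flight per data-taking interval.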

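The multi-level aggregation can be sketched in the same spirit: an adder pulls increment messages, deserialises them and sums the deltas into its local totals. Because histogram addition is associative and commutative, each adder can re-publish its partial sums upstream as new increments, which is what lets a hierarchy of such nodes scale. Everything below (the type, the endpoint, a single level) is again an assumption for illustration, not the production code.

    // Adder sketch: pull increments and accumulate them locally.
    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/serialization/string.hpp>
    #include <boost/serialization/utility.hpp>
    #include <boost/serialization/vector.hpp>
    #include <map>
    #include <sstream>
    #include <string>
    #include <utility>
    #include <vector>
    #include <zmq.h>

    struct HistIncrement {                  // same layout as in the sender sketch
        std::string id;
        std::vector<std::pair<int, double>> bins;

        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & id & bins;
        }
    };

    int main() {
        void* ctx = zmq_ctx_new();
        void* pull = zmq_socket(ctx, ZMQ_PULL);
        zmq_bind(pull, "tcp://*:31337");    // assumed endpoint, matches the sender

        // id -> (bin -> running sum); stands in for the adder's histogram copies.
        std::map<std::string, std::map<int, double>> totals;

        for (;;) {
            zmq_msg_t msg;
            zmq_msg_init(&msg);
            if (zmq_msg_recv(&msg, pull, 0) < 0) break;

            std::istringstream is(std::string(
                static_cast<char*>(zmq_msg_data(&msg)), zmq_msg_size(&msg)));
            zmq_msg_close(&msg);

            HistIncrement inc;
            boost::archive::binary_iarchive ia(is);
            ia >> inc;

            // Increments commute, so these partial sums can themselves be
            // re-published upstream as new increments.
            for (const auto& [bin, delta] : inc.bins)
                totals[inc.id][bin] += delta;
        }

        zmq_close(pull);
        zmq_ctx_term(ctx);
    }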

Bibliographic Details
Main authors: Petrucci, Stefano; Matev, Rosen; Aaij, Roel
Language: English
Published: 2020
Subjects: Computing and Computers; Detectors and Experimental Techniques
Online access: https://dx.doi.org/10.1051/epjconf/202024501039
http://cds.cern.ch/record/2754094
Collection: CERN
Record ID: oai:inspirehep.net:1832022
Institution: European Organization for Nuclear Research (CERN)
Record format: Invenio