
Performance of the new DAQ system of the CMS experiment for Run-2


Bibliographic Details
Main Authors: André, Jean-Marc, Andronidis, Anastasios, Behrens, Ulf, Branson, James, Brummer, Philipp, Chaze, Olivier, Contescu, Cristian, Craigs, Benjamin G, Cittolin, Sergio, Darlea, Georgiana-Lavinia, Deldicque, Christian, Demiragli, Zeynep, Dobson, Marc, Erhan, Samim, Fulcher, Jonathan Richard, Gigi, Dominique, Glege, Frank, Gomez-Ceballos, Guillelmo, Hegeman, Jeroen, Holzner, André, Jiménez-Estupiañán, Raúl, Masetti, Lorenzo, Meijers, Frans, Meschi, Emilio, Mommsen, Remigius K, Morovic, Srečko, O'Dell, Vivian, Orsini, Luciano, Paus, Christoph, Pieri, Marco, Racz, Attila, Sakulin, Hannes, Schwick, Christoph, Reis, Thomas, Simelevičius, Dainius, Zejdl, Petr
Language: English
Published: 2016
Subjects:
Online Access: https://dx.doi.org/10.1109/RTC.2016.7543164
http://cds.cern.ch/record/2264424
Description
Summary: The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system was redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes and the networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration, 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR Clos network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ-HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and of the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported from the first year of operation with LHC proton-proton runs as well as with the heavy-ion lead-lead runs in 2015.
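The file-based DAQ-HLT decoupling described in the abstract can be illustrated with a minimal sketch: a producer writes a raw event file and only then publishes a small JSON metadata file (atomically, via write-and-rename), so a consumer polling the shared file system never observes a partially written record. The file-naming scheme, metadata fields, and helper names below are illustrative assumptions for this sketch, not the actual CMS conventions.

```python
import json
import os
import struct
import tempfile


def write_lumi_file(out_dir, run, lumi, events):
    """Producer side: write length-prefixed raw events, then publish
    a .jsn metadata file announcing the completed raw file.

    The metadata file is created with write + atomic rename so a
    consumer watching the directory sees either nothing or a
    complete record, never a half-written one.
    """
    raw_name = f"run{run:06d}_ls{lumi:04d}.raw"
    raw_path = os.path.join(out_dir, raw_name)
    with open(raw_path, "wb") as f:
        for ev in events:
            f.write(struct.pack("<I", len(ev)))  # 4-byte little-endian size
            f.write(ev)                          # event payload
    meta = {"data": [len(events), os.path.getsize(raw_path), raw_name]}
    fd, tmp = tempfile.mkstemp(dir=out_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(meta, f)
    # Atomic on POSIX when source and target are on the same filesystem.
    os.rename(tmp, os.path.join(out_dir, raw_name.replace(".raw", ".jsn")))


def pending_files(out_dir):
    """Consumer side: list metadata files announcing complete raw files."""
    return sorted(n for n in os.listdir(out_dir) if n.endswith(".jsn"))
```

The key design point this mirrors is that the producer and consumer share no protocol beyond the file system itself, which is what allows the DAQ and HLT to evolve independently.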