
ATLAS TDAQ System Administration: evolution and re-design

Bibliographic Details
Main Authors: Ballestrero, Sergio, Bogdanchikov, Alexander, Brasolin, Franco, Contescu, Alexandru Cristian, Dubrov, Sergei, Fazio, Daniel, Korol, Aleksandr, Lee, Christopher Jon, Scannicchio, Diana, Twomey, Matthew Shaun
Language: eng
Published: 2015
Subjects:
Online Access: https://dx.doi.org/10.1088/1742-6596/664/8/082024
http://cds.cern.ch/record/2016420
Description
Summary: The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of $\sim 3000$ servers, processing the data readout from $\sim 100$ million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1), the ATLAS TDAQ System Administrators carried out a tremendous amount of work: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. The OS has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration of the Configuration Management systems to Puppet has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and Quattor have consequently been retired. Virtual Machine (VM) usage has been investigated and tested, and many of our core servers are now running on VMs. Virtualisation has also been used to adapt the High Level Trigger farm as a batch system, which has been used for running Monte Carlo production jobs that are mostly CPU-bound rather than I/O-bound. Finally, monitoring the health and the status of $\sim 3000$ machines in the experimental area is obviously of the utmost importance, so the obsolete Nagios v2 has been replaced with Icinga, complemented by Ganglia as a performance data provider. This paper reports the "what", "why" and "how" of this work, undertaken to improve and produce a system capable of performing for the next three years of ATLAS data taking.
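
For illustration only, the sketch below shows the kind of host-reachability check that a monitoring stack such as Icinga typically wraps as a plugin-style probe. It is a minimal sketch under stated assumptions, not the tooling described in the paper: the host names, timeout, and thread-pool size are invented, and only the standard Icinga/Nagios plugin exit-code convention (0 = OK, 2 = CRITICAL) is taken from the real systems.

#!/usr/bin/env python3
"""Hypothetical sketch of a farm-node reachability probe (not the TDAQ tooling)."""

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Illustrative host list; a real inventory would come from the configuration
# management system rather than being hard-coded.
HOSTS = [f"node-{i:04d}.example.cern.ch" for i in range(1, 11)]


def is_reachable(host: str, timeout_s: int = 2) -> bool:
    """Return True if the host answers a single ICMP echo within timeout_s seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def main() -> None:
    # Probe hosts in parallel so a few unreachable nodes do not serialise the scan.
    with ThreadPoolExecutor(max_workers=32) as pool:
        statuses = dict(zip(HOSTS, pool.map(is_reachable, HOSTS)))

    down = [host for host, ok in statuses.items() if not ok]
    # Plugin-style exit codes: 0 = OK, 2 = CRITICAL.
    if down:
        print(f"CRITICAL - {len(down)}/{len(HOSTS)} hosts unreachable: {', '.join(down)}")
        raise SystemExit(2)
    print(f"OK - all {len(HOSTS)} hosts reachable")


if __name__ == "__main__":
    main()

In practice a check like this would be registered with the monitoring server and scheduled against the full farm inventory, with the performance metrics (load, temperature, I/O) coming from a separate collector such as Ganglia, as the abstract describes.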