Real-time configuration changes of the ATLAS High Level Trigger
Main author:
Language: eng
Published: 2010
Subjects:
Online access: https://dx.doi.org/10.1109/RTC.2010.5750407 http://cds.cern.ch/record/1271370
Summary: The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally to about 2000 nodes, following the expected increase in LHC luminosity. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking, while ensuring a consistent and reproducible configuration across the entire HLT farm. The techniques developed to allow these real-time configuration changes will be exemplified on the basis of two applications: algorithm prescales and beamspot measurement. The prescale value determines the fraction of events on which an HLT algorithm is executed, and can also be used to deactivate it entirely. This feature is essential both during the commissioning phase of the HLT and for adjusting the mixture of recorded physics events during an LHC run. The primary event vertex distribution, from which the beam spot position and size can be extracted, is measured by a dedicated HLT algorithm on each node and periodically aggregated across the HLT farm; its parameters are published and stored in the conditions database. The result can be fed back to the HLT algorithms to maintain selection efficiency and rejection rates. Finally, we will briefly mention the technologies employed to allow the simultaneous database access of thousands of applications in an online environment.
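
To illustrate the prescale mechanism described in the abstract, the following C++ sketch shows one way a prescale factor could gate an HLT chain: a prescale of N lets the chain run on roughly 1 in N events, a value below 1 deactivates it, and the value can be swapped atomically (e.g. at a luminosity-block boundary) without restarting the process. All names and interfaces here are hypothetical and are not taken from the actual ATLAS trigger steering code.

```cpp
// Illustrative sketch only: class and method names are hypothetical and do
// not reflect the real ATLAS HLT steering implementation.
#include <atomic>
#include <cstdint>

// A prescale of N means the chain runs on (on average) 1 out of N events;
// a prescale below 1 is interpreted here as "chain deactivated".
class PrescaledChain {
public:
    explicit PrescaledChain(double prescale) { setPrescale(prescale); }

    // Called when a new prescale set becomes valid, e.g. at a
    // luminosity-block boundary; no restart of the HLT process is needed.
    void setPrescale(double prescale) {
        prescale_.store(prescale, std::memory_order_relaxed);
    }

    // Decide whether the chain's algorithms should run on this event.
    bool accept(std::uint64_t eventNumber) const {
        const double ps = prescale_.load(std::memory_order_relaxed);
        if (ps < 1.0) return false;  // chain deactivated
        // Simple deterministic 1-in-N decision; a production system would
        // typically use a pseudo-random choice to avoid correlations
        // between chains with related prescale values.
        return (eventNumber % static_cast<std::uint64_t>(ps)) == 0;
    }

private:
    std::atomic<double> prescale_{1.0};
};
```

Holding the prescale in an atomic variable is one way to let a configuration service update it from another thread while event processing continues, which mirrors the requirement in the abstract that configuration changes must not disrupt data taking.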