Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS
Main author:
Language: eng
Published: 2012
Subjects:
Online access: http://cds.cern.ch/record/1432582
Summary: Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever-increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) is a custom-designed hardware trigger which seeds two higher, software-based trigger levels. Over 300 triggers compose a trigger menu that selects physics signatures such as electrons, muons, and particle jets. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used to prepare the ATLAS trigger for data taking during increases of more than six orders of magnitude in LHC luminosity and has been influential in guiding ATLAS trigger computing upgrades.
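The rate-prediction and cost-accounting idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the actual ATLAS framework; it assumes a simple linear scaling of trigger rates with instantaneous luminosity, and every name in it (`TriggerCost`, `predict_rate`, `total_cpu_load`, the example chains and budget figures) is hypothetical.

```python
# Illustrative sketch only -- the linear-scaling assumption and all names
# are hypothetical, not the ATLAS trigger cost monitoring framework itself.
from dataclasses import dataclass


@dataclass
class TriggerCost:
    name: str                 # trigger chain name (hypothetical examples below)
    rate_hz: float            # output rate measured at the reference luminosity
    cpu_ms_per_event: float   # mean CPU time spent per event by this chain
    prescale: float = 1.0     # only 1/prescale of candidate events are kept


def predict_rate(t: TriggerCost, ref_lumi: float, target_lumi: float) -> float:
    """Extrapolate a trigger's rate to a new luminosity.

    Assumes the rate scales linearly with instantaneous luminosity, a
    reasonable first approximation for single-object triggers; pile-up
    sensitive chains would need a higher-order model.
    """
    return t.rate_hz * (target_lumi / ref_lumi) / t.prescale


def total_cpu_load(menu: list[TriggerCost],
                   ref_lumi: float, target_lumi: float) -> float:
    """Total CPU demand of the menu, in CPU-seconds per wall-clock second."""
    return sum(
        predict_rate(t, ref_lumi, target_lumi) * t.cpu_ms_per_event / 1000.0
        for t in menu
    )


# Example: check a two-trigger menu against a hypothetical CPU budget
# while extrapolating from 1e33 to 5e33 cm^-2 s^-1.
menu = [
    TriggerCost("single_electron", rate_hz=120.0, cpu_ms_per_event=45.0),
    TriggerCost("jet_low_pt", rate_hz=900.0, cpu_ms_per_event=12.0, prescale=10.0),
]
load = total_cpu_load(menu, ref_lumi=1.0e33, target_lumi=5.0e33)
print(f"Predicted CPU load: {load:.1f} CPU-seconds/s")
```

A monitoring framework like the one described would gather the measured per-algorithm inputs (rates, CPU times, data request counts) from enhanced-bias or online data and compare the extrapolated totals against the farm and bandwidth budgets when composing a menu for higher luminosity.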