Resource utilization by ATLAS High Level Triggers. Contributed talk at Technology and Instrumentation in Particle Physics 2011.
Main author: (not listed)
Language: English
Published: 2011
Subjects: (not listed)
Online access: http://cds.cern.ch/record/1356581
Summary: In 2010 the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 (L1) and software algorithms executing on commodity servers at the two higher levels: the second level trigger (L2) and the event filter (EF). The corresponding trigger rates are 75 kHz, 3 kHz, and 200 Hz. The L2 uses custom algorithms to examine a small fraction of the data at full detector granularity in Regions of Interest selected by the L1. The EF employs offline algorithms and full detector data for more computationally intensive analysis.

The trigger selection is defined by trigger menus, which consist of more than 500 individual trigger signatures, such as electrons, muons, and particle jets. Executing a trigger signature incurs computing and data-storage costs. The composition of the deployed trigger menu depends on the instantaneous LHC luminosity, the experiment's goals for the recorded data, and the limits imposed by the available computing power, network bandwidth, and storage space. We have developed a monitoring infrastructure to assign a computing cost to individual trigger signatures and to the trigger menu as a whole. These costs can be extrapolated to higher luminosity, allowing the development of trigger menus for a higher LHC collision rate than currently achievable.

Total execution times of L2 and EF algorithms are monitored to ensure that sufficient computing resources are available to process events accepted by the lower trigger levels. For events accepted by the L1, data fragments are buffered by the Readout System (ROS), which provides them on demand to the L2 algorithms. The rate and volume of these data requests by the individual L2 algorithms are also monitored, and the trigger menus are corrected when necessary to prevent exceeding the maximum allowed ROS request rate. In addition, the cabling patterns of readout links from sub-detector front-ends are checked for potential inefficiencies which could limit ROS performance. Finally, the acceptance rates of individual signatures at higher luminosity are computed using specially recorded detector data. The acceptance rate of the entire menu is also computed, taking into account correlations between signatures.

In this presentation we describe the software infrastructure for measuring resource utilization of the ATLAS High Level Trigger. We also describe the procedure and tools employed by ATLAS in 2010 to develop trigger menus as the LHC collision rate increased by several orders of magnitude.
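The abstract does not spell out how signature costs and rates are extrapolated to higher luminosity. As an illustrative sketch only, the snippet below fits a power law, rate = a * L^b, to hypothetical (luminosity, rate) measurements and evaluates it at a higher luminosity; single-object signatures typically scale roughly linearly (b near 1), while multi-object and pile-up-sensitive signatures grow faster. All names and numbers here are invented, not taken from the ATLAS tools.

```python
# Hypothetical sketch of luminosity extrapolation for one trigger signature.
# We fit rate = a * L**b via a linear fit in log-log space, then evaluate
# the fit at a target luminosity. Data points are invented for illustration.
import numpy as np

def fit_power_law(lumi, rate):
    """Return (a, b) such that rate ~ a * lumi**b, fitted in log-log space."""
    b, log_a = np.polyfit(np.log(lumi), np.log(rate), 1)
    return np.exp(log_a), b

# Invented measurements: luminosity in arbitrary units, rate in Hz.
lumi = np.array([1.0, 2.0, 5.0, 10.0])
rate = np.array([4.1, 8.0, 21.0, 40.5])

a, b = fit_power_law(lumi, rate)
target_lumi = 100.0  # extrapolate an order of magnitude beyond the data
print(f"rate ~ {a:.2f} * L^{b:.2f}; "
      f"predicted at L={target_lumi:g}: {a * target_lumi**b:.1f} Hz")
```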
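The closing point about correlations between signatures merits a worked example: because one event can fire several signatures, the menu's total acceptance rate is the rate of the OR of all signatures over the events, not the sum of the per-signature rates. A minimal, hypothetical sketch with toy data and invented signature names:

```python
# Hypothetical sketch: total menu rate vs. the sum of per-signature rates.
# Each event is represented by the set of signatures it fired.
from typing import Dict, List, Set

def menu_rate(events: List[Set[str]], input_rate_hz: float) -> float:
    """Rate of events accepted by at least one signature in the menu."""
    accepted = sum(1 for fired in events if fired)
    return input_rate_hz * accepted / len(events)

def signature_rates(events: List[Set[str]], input_rate_hz: float) -> Dict[str, float]:
    """Acceptance rate of each individual signature."""
    counts: Dict[str, int] = {}
    for fired in events:
        for sig in fired:
            counts[sig] = counts.get(sig, 0) + 1
    return {s: input_rate_hz * n / len(events) for s, n in counts.items()}

# Three toy events; 'e20' and 'mu10' overlap on the second event.
events = [{"e20"}, {"e20", "mu10"}, set()]
print(signature_rates(events, 3000.0))  # {'e20': 2000.0, 'mu10': 1000.0}
print(menu_rate(events, 3000.0))        # 2000.0 < 3000.0: overlap matters
```

The sum of the individual rates double-counts the overlapping event, which is why the menu rate must be computed per event rather than per signature.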