The Resource utilization by ATLAS High Level Triggers. The contributed talk for the Technology and Instrumentation in Particle Physics 2011.
Main author: | Ospanov, R |
---|---|
Language: | eng |
Published: | 2011 |
Subjects: | Detectors and Experimental Techniques |
Online access: | http://cds.cern.ch/record/1356581 |
_version_ | 1780922491695267840 |
---|---|
author | Ospanov, R |
author_facet | Ospanov, R |
author_sort | Ospanov, R |
collection | CERN |
description | In 2010 the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 (L1) and software algorithms executing on commodity servers at the two higher levels: the second level trigger (L2) and the event filter (EF). The corresponding trigger rates are 75 kHz, 3 kHz and 200 Hz. The L2 uses custom algorithms to examine a small fraction of the data at full detector granularity in Regions of Interest selected by the L1. The EF employs offline algorithms and full detector data for more computationally intensive analysis. The trigger selection is defined by trigger menus consisting of more than 500 individual trigger signatures, such as electrons, muons, particle jets, etc. Executing a trigger signature incurs computing and data-storage costs. The composition of the deployed trigger menu depends on the instantaneous LHC luminosity, the experiment's goals for the recorded data, and the limits imposed by the available computing power, network bandwidth and storage space. We have developed a monitoring infrastructure to assign a computing cost to individual trigger signatures and to the trigger menu as a whole. These costs can be extrapolated to higher luminosity, allowing the development of trigger menus for a higher LHC collision rate than is currently achievable. The total execution times of the L2 and EF algorithms are monitored to ensure that sufficient computing resources are available to process events accepted by the lower trigger levels. For events accepted by the L1, data fragments are buffered by the Readout System (ROS), which provides them on demand to the L2 algorithms. The rate and volume of these data requests by individual L2 algorithms are also monitored, and the trigger menus are corrected when necessary to prevent exceeding the maximum allowed ROS request rate. In addition, the cabling patterns of the readout links from the sub-detector front-ends are checked for potential inefficiencies which could limit ROS performance. Finally, the acceptance rate of individual signatures at higher luminosity is computed using specially recorded detector data. The acceptance rate of the entire menu is also computed, taking into account correlations between signatures. In this presentation we describe the software infrastructure for measuring resource utilization of the ATLAS High Level Trigger. We also describe the procedure and tools employed by ATLAS in 2010 to develop trigger menus as the LHC collision rate increased by several orders of magnitude. |
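
The cost-accounting and extrapolation procedure described in the abstract can be illustrated with a short sketch. The following Python fragment is not the ATLAS trigger-cost framework; it is a minimal illustration, under the simplifying assumption that signature rates scale linearly with luminosity, of how per-signature CPU time, ROS request counts and acceptance could be accumulated and scaled to a target luminosity. All class, function and signature names (`SignatureCost`, `extrapolate`, `L2_e20_medium`) are hypothetical.

```python
# Minimal sketch of per-signature cost bookkeeping and luminosity extrapolation.
# Not the ATLAS implementation; names and numbers are illustrative only.
from dataclasses import dataclass


@dataclass
class SignatureCost:
    name: str                  # trigger signature, e.g. an electron or muon chain
    events_seen: int = 0       # events on which the signature executed
    cpu_time_ms: float = 0.0   # summed algorithm execution time (L2 or EF)
    ros_requests: int = 0      # summed data requests to the Readout System (ROS)
    accepted: int = 0          # events accepted by the signature

    def add_event(self, cpu_ms: float, ros_reqs: int, passed: bool) -> None:
        """Accumulate the monitored cost of one event."""
        self.events_seen += 1
        self.cpu_time_ms += cpu_ms
        self.ros_requests += ros_reqs
        self.accepted += int(passed)


def extrapolate(sig: SignatureCost, input_rate_hz: float,
                lumi_now: float, lumi_target: float) -> dict:
    """Scale measured per-signature quantities to a target instantaneous
    luminosity, assuming (as a first approximation) a linear rate increase."""
    scale = lumi_target / lumi_now
    n = max(sig.events_seen, 1)
    return {
        "signature": sig.name,
        "output_rate_hz": input_rate_hz * (sig.accepted / n) * scale,
        "cpu_ms_per_s": input_rate_hz * (sig.cpu_time_ms / n) * scale,
        "ros_requests_per_s": input_rate_hz * (sig.ros_requests / n) * scale,
    }


# Hypothetical L2 electron chain measured at 1e32 cm^-2 s^-1 and extrapolated
# to 1e33 cm^-2 s^-1, to check it stays within CPU and ROS request budgets.
e20 = SignatureCost("L2_e20_medium")
e20.add_event(cpu_ms=12.0, ros_reqs=3, passed=True)
e20.add_event(cpu_ms=8.0, ros_reqs=2, passed=False)
print(extrapolate(e20, input_rate_hz=5000.0, lumi_now=1e32, lumi_target=1e33))
```

The extrapolated CPU and ROS figures would then be compared against the available farm capacity and the maximum allowed ROS request rate before a menu is deployed, which is the check the abstract describes.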
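
The abstract's point that the menu acceptance rate must account for correlations between signatures can also be shown in a few lines: summing per-signature rates double-counts events that fire several signatures, so the menu rate is the rate of the logical OR of the individual decisions. This too is only an illustrative sketch with hypothetical signature names, not the actual ATLAS procedure, which uses specially recorded detector data.

```python
# Illustrative only: the menu rate is the rate of events accepted by at least
# one signature, not the sum of per-signature rates (overlaps counted once).
def menu_rate(decisions: list[dict[str, bool]], input_rate_hz: float) -> float:
    """Rate of events accepted by at least one signature in the menu."""
    accepted = sum(1 for event in decisions if any(event.values()))
    return input_rate_hz * accepted / max(len(decisions), 1)


# Two heavily overlapping hypothetical signatures: the naive sum of their
# individual rates would overestimate the total output rate.
events = [
    {"e20": True,  "mu10": True},
    {"e20": True,  "mu10": False},
    {"e20": False, "mu10": False},
    {"e20": False, "mu10": True},
]
print(menu_rate(events, input_rate_hz=75_000.0))  # 75 kHz L1 input rate
```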
id | cern-1356581 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2011 |
record_format | invenio |
spelling | cern-1356581; 2019-09-30T06:29:59Z; http://cds.cern.ch/record/1356581; eng; Ospanov, R; Detectors and Experimental Techniques; ATL-DAQ-SLIDE-2011-237; oai:cds.cern.ch:1356581; 2011-06-06 |
title | The Resource utilization by ATLAS High Level Triggers. The contributed talk for the Technology and Instrumentation in Particle Physics 2011. |
topic | Detectors and Experimental Techniques |
url | http://cds.cern.ch/record/1356581 |