Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger
Main authors: zur Nedden, M; Sidoti, A; Ospanov, R
Language: eng
Published: 2010
Subjects: Detectors and Experimental Techniques
Online access: http://cds.cern.ch/record/1304577
_version_ | 1780921129298296832 |
author | zur Nedden, M Sidoti, A Ospanov, R |
author_facet | zur Nedden, M Sidoti, A Ospanov, R |
author_sort | zur Nedden, M |
collection | CERN |
description | Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate that has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and to assess the overall quality of the trigger selection during collision running. ATLAS has broad physics goals that require a large number of different active triggers for complex event topologies, which in turn demand sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware, while the high-level triggers (HLT) are software-based and run on large PC farms. The trigger reduces the bunch crossing rate of 40 MHz, at design luminosity, to an average event rate of about 200 Hz for storage. Since ATLAS is a general-purpose detector, the trigger must be sensitive to a large number of different final-state processes. The vast majority of the interesting physics processes produce trigger topologies with one or more high transverse energy (ET) particles of various types, such as leptons, photons and quarks (jets), or events with high missing transverse energy (ETmiss) from undetected particles. For each of these types, one or more trigger signatures are defined: µ, e, γ, τ, jet, b-jet and ETmiss. The definition of a trigger signature may include isolation criteria. Trigger decisions are mainly based on combinations of trigger signatures with transverse energy or momentum above various thresholds, which are identified during trigger processing. The core HLT software which handles the successive running of algorithms and controls the information flow during the HLT decision process is called the Trigger Steering. The Trigger Steering is configured by a Trigger Menu, which consists mainly of a collection of trigger chains.
The trigger chains define the sequence of steps taken to derive the trigger decision, starting from a particular input trigger item. An event can be discarded at any step of the trigger chain, thus avoiding the execution of the subsequent steps. The HLT algorithms search for the trigger signatures listed above. At level-2, custom algorithms request a small fraction of the full detector information, corresponding to regions identified by level-1. At the next level (the Event Filter), algorithms based on offline reconstruction have access to the full detector data. Given the complexity of the ATLAS data acquisition and trigger system, a reliable and redundant diagnostic and monitoring system is indispensable for a successful commissioning and stable running of the whole experiment. The HLT must process events within the limits imposed by the available computing power, network bandwidth and storage space. The main aspects of the performance monitoring are: trigger rates at each level, distributions reflecting the quality of the trigger signatures, and system performance indicators that reflect the trigger behavior during data taking. This information must be provided to the shift crew and trigger experts in real time in a convenient format so that they can react promptly to changing conditions of the LHC or of the ATLAS detector. In addition, it is vital to record information on the quality of trigger data for use in subsequent physics analyses. The offline verification of the quality of trigger-reconstructed objects is likewise essential for all physics analyses. Detailed information about HLT resource utilization is collected using a dedicated monitoring tool. The tool measures the processing time of each HLT trigger algorithm, including network latency, and the access patterns for reading data from individual sub-detectors, and provides this and further information to the Trigger Steering process.
The information is also collected for events that are rejected at a later stage and are hence inaccessible for offline analysis, which is crucial for understanding HLT execution costs (rate × execution time for a given trigger algorithm). These measurements are used to extrapolate the HLT bandwidth and computing requirements to higher LHC luminosities. With cosmic-muon events, single LHC beams and proton-proton collision data, the reliability and smooth running of the ATLAS trigger and data acquisition system could be demonstrated. All the monitoring systems implemented so far provide satisfactory functionality and deliver the necessary support to run the ATLAS trigger system. A tool for fast trigger checks in the control room has been implemented and tested. The development phase has concluded, but with the first data in 2010 many new challenges must be faced in trigger monitoring. In order to assess the trigger data quality reliably, the behavior of the system has to be understood better with real data. As a consequence, the parameters of the framework, such as the thresholds of the Data Quality tests and the reference histograms, have to be optimized using the first data. This requires close cooperation with the ATLAS physics analysis groups. The offline diagnostic tools, based on the monitoring tools run during standard reconstruction, and even earlier during the first reconstruction of the express stream, will also be extended with the experience of the first real collision data. The trigger operation focus is now shifting from development and commissioning to constant monitoring of the system, optimization and continuous adjustment of its parameters, and assessment of its performance. |
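The early-rejection logic of a trigger chain and the per-algorithm bookkeeping behind cost monitoring can be sketched as follows. This is a minimal illustration of the concept described in the abstract, not the actual ATLAS trigger software; the function names, cut values and event fields are all hypothetical.

```python
# Sketch of a trigger chain: algorithms run in sequence and the event
# is discarded at the first failing step, so later (more expensive)
# steps never execute. A simple call counter mimics the bookkeeping
# needed for the cost measure rate x execution time per algorithm.

def run_chain(event, steps, monitor):
    """Run each (name, algorithm) step; stop at the first rejection."""
    for name, algo in steps:
        monitor[name] = monitor.get(name, 0) + 1  # count executions
        if not algo(event):
            return False   # event discarded; remaining steps skipped
    return True            # event accepted by the full chain

# Toy two-step chain: a cheap level-2-style ET cut, then an expensive
# Event-Filter-style isolation cut that only runs if level-2 passed.
steps = [
    ("L2_et_cut",    lambda ev: ev["et"] > 20.0),
    ("EF_isolation", lambda ev: ev["iso"] < 0.1),
]

monitor = {}
accepted = [run_chain(ev, steps, monitor)
            for ev in ({"et": 25.0, "iso": 0.05},   # passes both steps
                       {"et": 10.0, "iso": 0.02},   # fails L2; EF never runs
                       {"et": 30.0, "iso": 0.30})]  # passes L2, fails EF
```

Because the second event fails the first step, the isolation algorithm runs only twice for three events; multiplying such per-algorithm execution counts (rates) by measured execution times gives the cost figure used to extrapolate to higher luminosities.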
id | cern-1304577 |
institution | European Organization for Nuclear Research (CERN) |
language | eng |
publishDate | 2010 |
record_format | invenio |
spelling | cern-1304577 2019-09-30T06:29:59Z http://cds.cern.ch/record/1304577 eng zur Nedden, M Sidoti, A Ospanov, R Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger Detectors and Experimental Techniques ATL-DAQ-SLIDE-2010-475 oai:cds.cern.ch:1304577 2010-11-03 |
spellingShingle | Detectors and Experimental Techniques zur Nedden, M Sidoti, A Ospanov, R Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title | Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title_full | Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title_fullStr | Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title_full_unstemmed | Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title_short | Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger |
title_sort | diagnostic systems and resources utilization of the atlas high level trigger |
topic | Detectors and Experimental Techniques |
url | http://cds.cern.ch/record/1304577 |
work_keys_str_mv | AT zurneddenm diagnosticsystemsandresourcesutilizationoftheatlashighleveltrigger AT sidotia diagnosticsystemsandresourcesutilizationoftheatlashighleveltrigger AT ospanovr diagnosticsystemsandresourcesutilizationoftheatlashighleveltrigger |