
The ATLAS High Level Trigger Infrastructure, Performance and Future Developments



Bibliographic Details
Main author: The ATLAS collaboration
Language: eng
Published: 2009
Subjects: Detectors and Experimental Techniques
Online access: http://cds.cern.ch/record/1175213
collection CERN
description The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. The system currently consists of about 850 multi-core processing nodes and will be extended incrementally to about 2000 nodes as the LHC luminosity increases, depending on the evolution of processor technology. Because of the complexity and similarity of the algorithms, a large fraction of the software is shared between the online and offline event reconstruction. The HLT Infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed, and experiences from the operation of the ATLAS HLT during cosmic-ray data taking and first beam in 2008 will be presented. Since the per-event processing time at the HLT directly determines the number of processing nodes required, special emphasis has to be placed on monitoring and improving the performance of the software. Both open-source and custom-developed tools are used for this task, and a few use cases will be shown. Finally, the implications of the prevailing industry trend towards multi- and many-core processors for the architecture of the ATLAS HLT will be discussed. The use of multi-processing and multi-threading techniques within the current system will be presented. Several approaches to making optimal use of the increasing number of cores will be investigated, and the practical implications of implementing each approach in a system maintained by hundreds of developers and comprising several hundred thousand lines of code will be examined.
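The abstract's point that the per-event processing time directly drives the required farm size can be illustrated with a back-of-envelope estimate. The short sketch below is not part of the record; the input rate, mean processing time, and cores-per-node figures are hypothetical placeholders chosen only to show the scaling.

# Back-of-envelope estimate of HLT farm size (illustrative only;
# all input numbers below are assumptions, not taken from the record).
input_rate_hz = 75_000       # events per second entering the HLT (assumed)
mean_proc_time_s = 0.040     # mean processing time per event (assumed)
cores_per_node = 8           # cores per processing node (assumed)

# Each core sustains 1 / mean_proc_time_s events per second, so the
# number of nodes required grows linearly with the per-event time.
events_per_node_per_s = cores_per_node / mean_proc_time_s
nodes_required = input_rate_hz / events_per_node_per_s
print(f"approx. nodes required: {nodes_required:.0f}")

With these assumed figures the farm would need roughly 375 nodes; halving the mean processing time halves that number for the same input rate, which is why the abstract stresses monitoring and improving the software's performance.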
id cern-1175213
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2009
record_format invenio
spelling cern-1175213, last modified 2019-09-30T06:29:59Z
report_numbers ATL-DAQ-SLIDE-2009-106, ATL-COM-DAQ-2009-035
oai_id oai:cds.cern.ch:1175213
date 2009-05-04
title The ATLAS High Level Trigger Infrastructure, Performance and Future Developments
topic Detectors and Experimental Techniques
url http://cds.cern.ch/record/1175213