
The ATLAS Data Flow system in Run2: Design and Performance


Bibliographic Details
Main author: Rifki, Othmane
Language: eng
Published: 2016
Subjects:
Online access: http://cds.cern.ch/record/2209354
_version_ 1780951764933017600
author Rifki, Othmane
author_facet Rifki, Othmane
author_sort Rifki, Othmane
collection CERN
description The ATLAS detector uses a real-time selective triggering system to reduce the 40 MHz interaction rate to its data-storage capacity of 1 kHz. A hardware first-level trigger limits the rate to 100 kHz, and a software high-level trigger selects events for offline analysis. Building on the experience gained during the successful first run of the LHC, the ATLAS Trigger and Data Acquisition system has been simplified and upgraded to take advantage of state-of-the-art technologies. The Dataflow element of the system comprises distributed hardware and software responsible for buffering and transporting event data from the Readout system to the High Level Trigger and to event storage. This system has been reshaped to maximize the flexibility and efficiency of the data-selection process. The updated dataflow differs from the previous implementation in both architecture and performance. The biggest difference is within the high-level trigger, where merging region-of-interest-based selection with event building and filtering into a single process allows incremental data collection and analysis. The commodity server farm running the high-level trigger algorithms, previously subdivided into slices each managed by a dedicated supervisor, is now managed globally by a single farm master operating at 100 kHz, referred to as the high-level trigger supervisor. The Region of Interest Builder, previously implemented on a VMEbus system, is now integrated with this supervisor, with region-of-interest building done in software. The Data Collection network that connects the high-level trigger processing nodes to the Readout and storage systems has evolved into a single Ethernet network, to which the Readout PCs are directly connected.
The aggregate throughput and port density have been increased by an order of magnitude through the introduction of advanced network routing, with significantly enhanced fault tolerance and redundancy. The overall design of the system is presented, along with performance results from the start-up phase of LHC Run 2.
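The staged rate reduction described in the abstract (40 MHz of interactions, 100 kHz after the hardware first-level trigger, 1 kHz to storage after the high-level trigger, with region-of-interest data examined before the full event is built) can be sketched as a toy pipeline. This is an illustrative sketch only, not ATLAS TDAQ software: the accept fractions follow from the quoted rates, while the RoI pre-selection probability (0.1) and the RoI labels are arbitrary assumptions made for the example.

```python
import random

# Accept fractions implied by the quoted rates: 40 MHz -> 100 kHz -> 1 kHz.
L1_ACCEPT = 100e3 / 40e6   # hardware Level-1 trigger: 1 in 400 events
HLT_ACCEPT = 1e3 / 100e3   # software high-level trigger: 1 in 100 L1 accepts
ROI_PASS = 0.1             # hypothetical RoI pre-selection fraction (assumption)

def level1(event_id, rng):
    """Hardware trigger decision; on accept, emits regions of interest (RoIs)."""
    accepted = rng.random() < L1_ACCEPT
    rois = ["muon", "em"] if accepted else []   # placeholder RoI labels
    return accepted, rois

def hlt(event_id, rois, rng):
    """Software trigger: analyse RoI data first; build the full event only if
    the RoI-based selection passes (the 'incremental data collection' idea)."""
    if not rois:
        return False
    if rng.random() >= ROI_PASS:
        return False                      # rejected without a full event build
    full_event = {"id": event_id, "rois": rois}  # event building happens here
    # Final filtering on the built event, tuned so the overall HLT accept
    # fraction is HLT_ACCEPT = ROI_PASS * (HLT_ACCEPT / ROI_PASS).
    return full_event is not None and rng.random() < HLT_ACCEPT / ROI_PASS

def run(n_events, seed=0):
    """Push n_events through the chain; return how many reach storage."""
    rng = random.Random(seed)
    stored = 0
    for ev in range(n_events):
        l1_ok, rois = level1(ev, rng)
        if l1_ok and hlt(ev, rois, rng):
            stored += 1
    return stored
```

The overall accept fraction of the sketch is L1_ACCEPT × HLT_ACCEPT = 2.5e-5, i.e. 1 kHz out of 40 MHz; most rejected events never incur a full event build, which is the point of the RoI-first design.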
id cern-2209354
institution European Organization for Nuclear Research
language eng
publishDate 2016
record_format invenio
spelling cern-2209354 2019-09-30T06:29:59Z http://cds.cern.ch/record/2209354 eng Rifki, Othmane The ATLAS Data Flow system in Run2: Design and Performance Particle Physics - Experiment ATL-DAQ-SLIDE-2016-508 oai:cds.cern.ch:2209354 2016-08-22
spellingShingle Particle Physics - Experiment
Rifki, Othmane
The ATLAS Data Flow system in Run2: Design and Performance
title The ATLAS Data Flow system in Run2: Design and Performance
title_full The ATLAS Data Flow system in Run2: Design and Performance
title_fullStr The ATLAS Data Flow system in Run2: Design and Performance
title_full_unstemmed The ATLAS Data Flow system in Run2: Design and Performance
title_short The ATLAS Data Flow system in Run2: Design and Performance
title_sort atlas data flow system in run2: design and performance
topic Particle Physics - Experiment
url http://cds.cern.ch/record/2209354
work_keys_str_mv AT rifkiothmane theatlasdataflowsysteminrun2designandperformance
AT rifkiothmane atlasdataflowsysteminrun2designandperformance