The ATLAS Data Flow System for LHC Run II
After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation in terms of both architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building have been merged into a single process that performs incremental data collection and analysis. This design has many advantages, among them the radical simplification of the architecture, the flexible and automatically balanced distribution of computing resources, and the sharing of code and services across nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The Data Collection network, which connects the HLT processing nodes to the readout and storage systems, has evolved to provide the network connectivity required by the new Data Flow architecture. The old Data Collection and Back-End networks have been merged into a single Ethernet network, and the readout PCs have been connected directly to the network cores. The aggregate throughput and port density have been increased by an order of magnitude, and the introduction of Multi-Chassis Trunking has significantly enhanced fault tolerance and redundancy. We discuss the design choices, the strategies employed to minimize the data-collection latency, and the architecture and implementation of the DF components.
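The abstract notes that per-slice supervisors were replaced by a single farm master that assigns Level-1-accepted events to HLT nodes at rates up to 100 kHz. The sketch below illustrates the general idea of such farm-wide, load-balanced event assignment; every name in it (FarmMaster, assign, the node labels) is hypothetical and does not reflect the actual ATLAS TDAQ interfaces.

```python
import collections

class FarmMaster:
    """Minimal sketch of a single, farm-wide supervisor: it receives
    Level-1 accepted event identifiers and assigns each one to the HLT
    node that currently has the most free processing slots
    (hypothetical API, not the ATLAS HLT supervisor implementation)."""

    def __init__(self, node_slots):
        # node_slots: {node_name: number of events the node can process in parallel}
        self.free_slots = collections.Counter(node_slots)
        self.assignments = {}  # event_id -> node_name

    def assign(self, event_id):
        # Simple load balancing: pick the node with the most free slots.
        node, slots = max(self.free_slots.items(), key=lambda kv: kv[1])
        if slots == 0:
            raise RuntimeError("back-pressure: no free HLT slots")
        self.free_slots[node] -= 1
        self.assignments[event_id] = node
        return node

    def done(self, event_id):
        # Called when a node reports the event as accepted or rejected.
        node = self.assignments.pop(event_id)
        self.free_slots[node] += 1


# Toy usage: spread a burst of Level-1 accepts over a two-node farm.
master = FarmMaster({"hlt-node-01": 2, "hlt-node-02": 3})
for event_id in range(5):
    print(event_id, "->", master.assign(event_id))
```

In this toy version the farm master simply keeps a per-node slot count in memory; the point is only the single point of assignment for the whole farm, replacing the per-slice supervisors described in the abstract.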
Main authors: | Kazarov, Andrei; ATLAS Collaboration |
---|---|
Language: | eng |
Published: | 2015 |
Subjects: | Particle Physics - Experiment |
Online access: | http://cds.cern.ch/record/2112127 |
_version_ | 1780948916878966784 |
---|---|
author | Kazarov, Andrei; ATLAS Collaboration |
author_facet | Kazarov, Andrei; ATLAS Collaboration |
author_sort | Kazarov, Andrei |
collection | CERN |
description | After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation in terms of both architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building have been merged into a single process that performs incremental data collection and analysis (see the sketch after the record fields below). This design has many advantages, among them the radical simplification of the architecture, the flexible and automatically balanced distribution of computing resources, and the sharing of code and services across nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The Data Collection network, which connects the HLT processing nodes to the readout and storage systems, has evolved to provide the network connectivity required by the new Data Flow architecture. The old Data Collection and Back-End networks have been merged into a single Ethernet network, and the readout PCs have been connected directly to the network cores. The aggregate throughput and port density have been increased by an order of magnitude, and the introduction of Multi-Chassis Trunking has significantly enhanced fault tolerance and redundancy. We discuss the design choices, the strategies employed to minimize the data-collection latency, and the architecture and implementation of the DF components. |
id | cern-2112127 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2015 |
record_format | invenio |
spelling | cern-2112127 2021-09-14T11:42:02Z http://cds.cern.ch/record/2112127 eng Kazarov, Andrei; ATLAS Collaboration The ATLAS Data Flow System for LHC Run II Particle Physics - Experiment After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to event storage. The DF has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation in terms of both architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building have been merged into a single process that performs incremental data collection and analysis. This design has many advantages, among them the radical simplification of the architecture, the flexible and automatically balanced distribution of computing resources, and the sharing of code and services across nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The Data Collection network, which connects the HLT processing nodes to the readout and storage systems, has evolved to provide the network connectivity required by the new Data Flow architecture. The old Data Collection and Back-End networks have been merged into a single Ethernet network, and the readout PCs have been connected directly to the network cores. The aggregate throughput and port density have been increased by an order of magnitude, and the introduction of Multi-Chassis Trunking has significantly enhanced fault tolerance and redundancy. We discuss the design choices, the strategies employed to minimize the data-collection latency, and the architecture and implementation of the DF components. ATL-DAQ-PROC-2015-064 oai:cds.cern.ch:2112127 2015-12-09 |
spellingShingle | Particle Physics - Experiment Kazarov, Andrei ATLAS Collaboration The ATLAS Data Flow System for LHC Run II |
title | The ATLAS Data Flow System for LHC Run II |
title_full | The ATLAS Data Flow System for LHC Run II |
title_fullStr | The ATLAS Data Flow System for LHC Run II |
title_full_unstemmed | The ATLAS Data Flow System for LHC Run II |
title_short | The ATLAS Data Flow System for LHC Run II |
title_sort | atlas data flow system for lhc run ii |
topic | Particle Physics - Experiment |
url | http://cds.cern.ch/record/2112127 |
work_keys_str_mv | AT kazarovandrei theatlasdataflowsystemforlhcrunii AT atlascollaboration theatlasdataflowsystemforlhcrunii AT kazarovandrei atlasdataflowsystemforlhcrunii AT atlascollaboration atlasdataflowsystemforlhcrunii |
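The description field above records that the L2 trigger, the Event Filter, and the Event Building were merged into a single HLT process performing incremental data collection and analysis: only region-of-interest fragments are fetched for a fast selection, and the full event is built only for events that survive it. The following is a minimal sketch of that flow under assumed interfaces; process_event, readout.fetch, readout.fetch_all, and storage.write are illustrative stand-ins, not the real ATLAS DF API.

```python
def process_event(event_id, roi_list, readout, fast_selection, full_selection, storage):
    """Incremental data collection sketch (hypothetical interfaces):
    fetch only the region-of-interest fragments first, run a fast
    selection, and build the complete event only if it survives."""
    # Step 1: partial collection -- only the fragments covering the RoIs.
    roi_fragments = {roi: readout.fetch(event_id, roi) for roi in roi_list}
    if not fast_selection(roi_fragments):
        return False  # early reject: the full event is never built

    # Step 2: full event building, only for the surviving fraction of events.
    full_event = readout.fetch_all(event_id)
    if not full_selection(full_event):
        return False

    # Step 3: accepted events are sent to permanent storage.
    storage.write(event_id, full_event)
    return True


# Stand-in readout and storage objects so the sketch runs on its own.
class StubReadout:
    def fetch(self, event_id, roi):
        return f"fragment({event_id},{roi})"

    def fetch_all(self, event_id):
        return f"full-event({event_id})"


class StubStorage:
    def write(self, event_id, event):
        print("stored", event_id, event)


accepted = process_event(
    event_id=42,
    roi_list=["em-roi-1", "mu-roi-2"],
    readout=StubReadout(),
    fast_selection=lambda fragments: True,  # stand-in for the fast (L2-like) step
    full_selection=lambda event: True,      # stand-in for the full (EF-like) step
    storage=StubStorage(),
)
print("accepted:", accepted)
```

The design advantage described in the abstract follows from this shape: because one process handles both the fast and full selection steps, the full event-building cost is paid only for events that pass the fast step, and code and services are naturally shared on each node.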