The ATLAS Data Flow system in Run 2: Design and Performance
Language: eng
Published: 2016
Online access: http://cds.cern.ch/record/2209354
Summary: The ATLAS detector uses a real-time selective triggering system to reduce the high interaction rate from 40 MHz to its data storage capacity of 1 kHz. A hardware first-level trigger limits the rate to 100 kHz, and a software high-level trigger selects events for offline analysis. Building on the experience gained during the successful first run of the LHC, the ATLAS Trigger and Data Acquisition system has been simplified and upgraded to take advantage of state-of-the-art technologies. The Dataflow element of the system is the distributed hardware and software responsible for buffering and transporting event data from the Readout system to the High Level Trigger and to event storage. It has been reshaped to maximize the flexibility and efficiency of the data selection process, and the updated dataflow differs from the previous implementation in both architecture and performance. The biggest change is within the high-level trigger, where region-of-interest-based selection, event building, and filtering have been merged into a single process, allowing incremental data collection and analysis. The commodity server farm running the high-level trigger algorithms, previously subdivided into slices each managed by a dedicated supervisor, is now managed globally by a single farm master, the high-level trigger supervisor, operating at 100 kHz. The Region of Interest Builder, previously implemented on a VMEbus system, is now integrated with this supervisor, and region-of-interest building is done in software. The Data Collection network that connects the high-level trigger processing nodes to the Readout and storage systems has evolved into a single Ethernet network, to which the Readout PCs are now directly connected. Aggregate throughput and port density have increased by an order of magnitude with the introduction of advanced network routing, and fault tolerance and redundancy have been significantly enhanced. The overall design of the system will be presented, along with performance results from the start-up phase of LHC Run 2.
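
As a quick consistency check, the three quoted rates imply the rejection factors worked out below. Only the rates themselves (40 MHz, 100 kHz, 1 kHz) come from the abstract; the derived factors are plain arithmetic, sketched in Python for clarity.

```python
# Worked check of the rate-reduction factors implied by the quoted rates.
# Only the three input rates come from the abstract; the factors are
# derived arithmetic shown here for illustration.

collision_rate_hz = 40_000_000   # LHC interaction rate: 40 MHz
l1_accept_rate_hz = 100_000      # hardware first-level trigger output: 100 kHz
storage_rate_hz = 1_000          # high-level trigger output to storage: 1 kHz

l1_rejection = collision_rate_hz / l1_accept_rate_hz    # factor of 400
hlt_rejection = l1_accept_rate_hz / storage_rate_hz     # factor of 100
total_rejection = collision_rate_hz / storage_rate_hz   # factor of 40,000

print(f"L1 rejection:    {l1_rejection:,.0f}x")
print(f"HLT rejection:   {hlt_rejection:,.0f}x")
print(f"Total rejection: {total_rejection:,.0f}x")
```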
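
The merged selection-and-building model described in the summary can be illustrated with a minimal control-flow sketch. Everything below is hypothetical: the function names, pass rates, and region-of-interest (RoI) labels are invented for illustration and are not part of the ATLAS software. The sketch only shows the idea that a single process can pull RoI data first, reject most events cheaply, and build the full event only for surviving candidates.

```python
# Toy sketch of incremental, RoI-driven selection merged with event building
# in a single high-level trigger process. All names and rates are invented;
# this is NOT ATLAS code, only an illustration of the control flow.

import random

def fetch_roi_fragments(event_id, rois):
    """Stand-in for pulling only the readout fragments inside each RoI."""
    return {roi: f"data({event_id},{roi})" for roi in rois}

def fast_selection(roi_data):
    """Stand-in for fast RoI-based algorithms; rejects most events early."""
    return random.random() < 0.05   # illustrative ~5% pass rate

def fetch_full_event(event_id):
    """Stand-in for building the complete event from all readout buffers."""
    return f"full_event({event_id})"

def full_filtering(event):
    """Stand-in for the final, offline-quality filtering step."""
    return random.random() < 0.2    # illustrative pass rate

def process(event_id, rois):
    # Step 1: pull only RoI data, so rejected events never cost a full build.
    if not fast_selection(fetch_roi_fragments(event_id, rois)):
        return None                 # early reject: minimal data moved
    # Step 2: only surviving candidates trigger full event building.
    event = fetch_full_event(event_id)
    return event if full_filtering(event) else None

accepted = [e for e in (process(i, ["em_calo", "muon"]) for i in range(10_000)) if e]
print(f"accepted {len(accepted)} of 10000 events")
```

The design point the sketch captures is that early rejection happens before full event building, so the data moved per rejected event is limited to the RoI fragments rather than the whole event.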