Operating the ATLAS Data-Flow System with the First LHC Collisions
Main author:
Language: eng
Published: 2010
Online access: http://cds.cern.ch/record/1300521
Subjects:

Summary: In this paper we report on the operation and performance of the ATLAS data-flow system during the 2010 physics run of the Large Hadron Collider (LHC) at 7 TeV. The data-flow system is responsible for reading out, formatting, and conveying the event data, ultimately saving the selected events to mass storage. By the second quarter of 2010, for the first time, the system will reach its full event-building capacity, with improved data-logging throughput. In particular, we detail the tools put in place to predict and track the system working point, with the aim of optimizing the bandwidth and the sharing of computing resources, and of anticipating possible limits. The LHC duty cycle, the trigger performance, and the detector configuration naturally influence the system working point. Numerical studies of the data-flow system's capabilities have therefore been performed for different scenarios. This is crucial for the first phase of LHC operations, where variable running conditions are anticipated due to the ongoing trigger commissioning and to detector and physics performance studies. Exploiting these results requires knowing and tracking the system working point, defined by a set of operational parameters, e.g. rates, throughput, and event size. Dedicated tools fulfill this mandate, providing integrated storage and visualization of the data-flow and network operational parameters.
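The summary defines the working point through operational parameters such as rates, throughput, and event size. As a minimal sketch of how such a working point can be derived and checked against capacity limits, the Python snippet below multiplies rates by mean event size to obtain throughputs for a few scenarios. All names, rates, and capacity figures here are illustrative assumptions, not actual ATLAS TDAQ values or tools.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One hypothetical set of running conditions."""
    name: str
    eb_rate_hz: float       # assumed event-building rate (Hz)
    logging_rate_hz: float  # assumed rate of events saved to mass storage (Hz)
    event_size_mb: float    # assumed mean event size (MB)

# Illustrative capacity figures only.
EB_CAPACITY_MB_S = 6000.0      # assumed aggregate event-building bandwidth (MB/s)
LOGGING_CAPACITY_MB_S = 500.0  # assumed data-logging throughput (MB/s)

def working_point(s: Scenario) -> dict:
    """Derive throughput figures (MB/s) and capacity usage from rates and event size."""
    eb_mb_s = s.eb_rate_hz * s.event_size_mb
    log_mb_s = s.logging_rate_hz * s.event_size_mb
    return {
        "scenario": s.name,
        "event_building_mb_s": eb_mb_s,
        "event_building_usage": eb_mb_s / EB_CAPACITY_MB_S,
        "logging_mb_s": log_mb_s,
        "logging_usage": log_mb_s / LOGGING_CAPACITY_MB_S,
    }

# Compare two made-up scenarios, e.g. early commissioning vs. nominal running.
for s in (Scenario("commissioning", eb_rate_hz=1000, logging_rate_hz=150, event_size_mb=1.3),
          Scenario("nominal", eb_rate_hz=3000, logging_rate_hz=200, event_size_mb=1.5)):
    wp = working_point(s)
    print(f"{wp['scenario']}: building {wp['event_building_mb_s']:.0f} MB/s "
          f"({wp['event_building_usage']:.0%} of capacity), "
          f"logging {wp['logging_mb_s']:.0f} MB/s "
          f"({wp['logging_usage']:.0%} of capacity)")
```

Tracking such derived quantities over time, alongside the raw rates, is what allows variable running conditions to be compared against fixed bandwidth and storage limits.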