Computer modeling the ATLAS Trigger/DAQ system performance
Main authors: |  |
Language: | eng |
Published: | 2003 |
Subjects: |  |
Online access: | https://dx.doi.org/10.1109/TNS.2004.828825 http://cds.cern.ch/record/681368 |
Summary: | In this paper, simulation ("computer modeling") of the Trigger/DAQ system of the ATLAS experiment at the LHC accelerator is discussed. The system will consist of a few thousand end-nodes interconnected by a large Local Area Network. The nodes will run various applications under the Linux OS. The purpose of computer modeling is to verify the rate-handling capability of the designed system and to find potential problem areas. The models of the system components are kept as simple as possible but are sufficiently detailed to reproduce the behavioral aspects relevant to the issues studied. Values of the model parameters have been determined using small dedicated setups. This calibration phase was followed by a validation process: more complex setups were wired up and relevant measurement results were obtained. These setups were also modeled, and the model results were compared to the measurements. Discrepancies led to modification and extension of the set of parameters. After gaining confidence in the system component models in this way, a model of the full-size ATLAS system was run. Predictions for latency, throughput, and queue development at various places have been obtained. The queue development is extremely important, as packet loss may cause severe performance degradation. We also tested various ideas on traffic shaping aimed at limiting the probability of network congestion and packet loss. |
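The summary's point that queue development drives packet loss can be illustrated with a toy discrete-event simulation of a single finite-buffer queue. This is only a minimal sketch of the general technique, not the authors' actual model: the function name, parameters, and exponential traffic assumption are all hypothetical, chosen for brevity.

```python
import random

def simulate_queue(arrival_rate, service_rate, buffer_size,
                   n_events=100_000, seed=1):
    """Toy discrete-event simulation of one finite-buffer queue (M/M/1/K-style).

    Tracks queue occupancy and counts packets dropped when the buffer is
    full, illustrating how queue growth near saturation leads to loss.
    All parameters are illustrative, not taken from the ATLAS model.
    """
    rng = random.Random(seed)
    clock = 0.0
    queue = 0                        # packets waiting or in service
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")    # no packet in service yet
    served = dropped = max_queue = 0
    for _ in range(n_events):
        if next_arrival <= next_departure:
            # Arrival event: enqueue if there is room, otherwise drop.
            clock = next_arrival
            if queue < buffer_size:
                queue += 1
                if queue == 1:       # server was idle, start service
                    next_departure = clock + rng.expovariate(service_rate)
            else:
                dropped += 1
            next_arrival = clock + rng.expovariate(arrival_rate)
        else:
            # Departure event: packet leaves, start serving the next one.
            clock = next_departure
            queue -= 1
            served += 1
            next_departure = (clock + rng.expovariate(service_rate)
                              if queue > 0 else float("inf"))
        max_queue = max(max_queue, queue)
    return {"served": served, "dropped": dropped, "max_queue": max_queue}

# Near saturation (load ~0.95) the queue grows toward the buffer limit.
print(simulate_queue(arrival_rate=0.95, service_rate=1.0, buffer_size=50))
```

Traffic shaping, in this sketch, would correspond to spacing out arrivals (lowering the effective burstiness of `arrival_rate`) so that `max_queue` stays well below `buffer_size` and `dropped` remains zero.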