ATLAS Distributed Computing Operations in the First Two Years of Data Taking
| Main author | |
|---|---|
| Language | eng |
| Published | 2012 |
| Subjects | |
| Online access | http://cds.cern.ch/record/1432481 |
| Summary | The ATLAS experiment has had two years of steady data taking in 2010 and 2011. Data are calibrated, reconstructed, distributed and analysed at over 100 sites using the Worldwide LHC Computing Grid. Following the experience in 2010, the data distribution policies were revised to address scalability issues arising from the increase in luminosity and trigger rate in 2011. The structure of the ATLAS computing model was also revised to optimise resource usage, according to effective transfer rates between sites and site availability. New infrastructures were introduced for software installation at the sites and for database access, to reduce bottlenecks in data processing. Issues in end-user analysis were studied, and an automated control system for the analysis queues, based on functional tests, was introduced. The monitoring tools were implemented and improved to review ATLAS activities by category. In this talk, we report on the operational experience and evolution of ATLAS Distributed Computing and on the system performance during the first two years of operation. |