
A quasi-online distributed data processing on WAN: the ATLAS muon calibration system

Bibliographic Details
Main Author: De Salvo, A
Language: eng
Published: 2013
Subjects: Detectors and Experimental Techniques
Online Access: http://cds.cern.ch/record/1609560
author De Salvo, A
collection CERN
description In the ATLAS experiment, the calibration of the precision tracking chambers of the muon detector is very demanding, since the rate of muon tracks required to get a complete calibration in homogeneous conditions and to feed prompt reconstruction with fresh constants is very high (several hundred Hz for 8-10 hour runs). The calculation of the calibration constants is highly CPU-intensive. To fulfill the requirement of completing the cycle and having the final constants available within 24 hours, distributed resources at Tier-2 centers have been allocated. The best place to collect muon tracks suitable for detector calibration is the second-level trigger, where the pre-selection performed by the first-level trigger via the Region of Interest mechanism allows all the hits of a single track to be selected within a limited region of the detector. Online data extraction allows calibration data to be collected without dedicated runs. Small event pseudo-fragments (about 0.5 kB) built at the muon level-1 rate (2-3 kHz at the beginning of the 2012 run, rising to 10-12 kHz at maximum LHC luminosity) are collected in parallel by a dedicated system, without affecting the main data taking, and sent to the Tier-0 computing center at CERN. The computing resources needed to calculate the calibration constants are distributed across three calibration centers (Rome, Munich, Ann Arbor) for the tracking chambers and one (Napoli) for the trigger chambers. From Tier-0, files are sent directly to the calibration centers through the ATLAS Data Distribution Manager. At the calibration centers, the data is split per trigger tower and distributed to computing nodes for concurrent processing (~250 cores are currently used at each center). Processing is performed in two stages: the first reconstructs tracks and creates ntuples, the second calculates the constants. The calibration parameters are then stored in the local calibration database and replicated to the main conditions database at CERN, which makes them available for data analysis within 24 hours of data extraction. The architecture and performance of this system during the 2011-2012 data taking will be presented. The system will evolve in the near future to comply with the stringent requirements of the LHC and ATLAS upgrades. While for the WAN distribution the available bandwidth is already much larger than needed for this task and the CPU power can be increased according to our needs, the online part will follow the evolution of the ATLAS TDAQ architecture. In particular, the current model foresees merging the level-2 and event-filter processes on the same nodes, allowing a simpler system and a more flexible and dynamic distribution of resources. Two architectures are possible to comply with this model; their possible implementation will be discussed.
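The figures quoted in the description fix the scale of the calibration stream. A minimal back-of-envelope sketch, assuming only the numbers given there (0.5 kB pseudo-fragments, 2-3 kHz early in 2012, 10-12 kHz at maximum LHC luminosity):

```python
# Back-of-envelope bandwidth of the calibration stream, using only the figures
# quoted in the abstract; this is an illustration, not the actual ATLAS code.
FRAGMENT_SIZE_KB = 0.5  # size of one muon event pseudo-fragment

def stream_bandwidth_mb_s(l1_rate_khz: float) -> float:
    """Calibration stream bandwidth in MB/s for a given level-1 muon rate."""
    return l1_rate_khz * 1e3 * FRAGMENT_SIZE_KB / 1e3  # fragments/s * kB -> MB/s

for rate_khz in (2, 3, 10, 12):
    print(f"{rate_khz:3d} kHz -> {stream_bandwidth_mb_s(rate_khz):4.1f} MB/s")
# ~1-1.5 MB/s at the 2012 startup rates and ~5-6 MB/s at maximum luminosity,
# consistent with the statement that the available WAN bandwidth is already
# much larger than needed for this task.
```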
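The split-per-trigger-tower, two-stage processing described above can be sketched as follows. The reconstruction and constant-fitting steps (reconstruct_tracks, compute_constants) are hypothetical stubs standing in for the real ATLAS calibration software; only the overall data flow is taken from the description.

```python
# Minimal sketch of the per-tower splitting and two-stage processing described
# above; stubs replace the real reconstruction and fitting code.
from concurrent.futures import ProcessPoolExecutor

def reconstruct_tracks(fragments):
    # Stage 1 (stub): reconstruct tracks and fill a calibration ntuple.
    return {"n_tracks": len(fragments)}

def compute_constants(ntuple):
    # Stage 2 (stub): derive calibration constants from the ntuple.
    return {"constants_from_tracks": ntuple["n_tracks"]}

def process_tower(item):
    tower_id, fragments = item
    return tower_id, compute_constants(reconstruct_tracks(fragments))

def run_calibration(fragments_by_tower, n_workers=4):
    # ~250 cores per calibration center in the real system; a handful here.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(process_tower, fragments_by_tower.items()))

if __name__ == "__main__":
    towers = {f"tower_{i:03d}": [b"frag"] * 100 for i in range(8)}
    print(run_calibration(towers))
```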
id cern-1609560
institution European Organization for Nuclear Research
language eng
publishDate 2013
record_format invenio
spelling cern-1609560 | 2019-09-30T06:29:59Z | http://cds.cern.ch/record/1609560 | eng | De Salvo, A | A quasi-online distributed data processing on WAN: the ATLAS muon calibration system | Detectors and Experimental Techniques | ATL-DAQ-SLIDE-2013-830 | oai:cds.cern.ch:1609560 | 2013-10-11
title A quasi-online distributed data processing on WAN: the ATLAS muon calibration system
topic Detectors and Experimental Techniques
url http://cds.cern.ch/record/1609560