Evolution of the ATLAS Trigger and Data Acquisition System

Bibliographic Details
Main author: Pozo Astigarraga, M E
Language: English
Published: 2014
Subjects: Particle Physics - Experiment
Online access: http://cds.cern.ch/record/1751941
Description: ATLAS is a physics experiment that explores high-energy particle collisions at the Large Hadron Collider (LHC) at CERN. It uses tens of millions of electronics channels to capture the outcome of particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (~100 TB/s), ATLAS uses a complex and highly distributed Trigger and Data Acquisition (TDAQ) system, in charge of selecting only interesting data and transporting them to permanent mass storage (~1 GB/s) for later analysis.

The data reduction is carried out in two stages. First, custom electronics performs an initial level of data rejection for each bunch crossing based on partial and localized information; only data corresponding to collisions passing this stage of selection are actually read out from the on-detector electronics. Then, a large computer farm (~17k cores) analyses these data in real time and decides which are worth storing for physics analysis. A large network moves the data from ~1800 front-end buffers to the location where they are processed, and from there to mass storage. The overall TDAQ system is embedded in a common software framework used to control, configure and monitor the data-taking process.

The experience gained during the first period of data taking of the ATLAS experiment (Run I, 2010-2012) has inspired a number of ideas for improving the TDAQ system that are being put in place during the so-called Long Shutdown 1 of the LHC, in 2013/14. This paper summarizes the main changes that have been applied to the ATLAS TDAQ system and highlights the expected performance and functional improvements that will be available for LHC Run II. Particular emphasis is put on the evolution of the software-based data selection and of the flow of data in the system. The reasons for the modified architectural and technical choices are explained, and details are provided on the simulation and testing approach used to validate the system.
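The two-stage selection described in the abstract can be illustrated with a toy pipeline: a fast first-level decision made from partial, localized information, followed by a software-level decision on the fully read-out event. This is a minimal sketch only; the event model, thresholds and variable names are hypothetical, not the actual ATLAS trigger logic.

```python
import random

# Toy sketch of a two-stage trigger: hardware-style Level-1 rejection
# on partial information, then a software-farm decision on the full event.
# All thresholds and event fields here are invented for illustration.

def level1_accept(partial_info):
    """Fast first-stage decision from partial, localized information."""
    return partial_info["calorimeter_energy_gev"] > 20.0

def hlt_accept(full_event):
    """Software-farm decision using the fully read-out event."""
    return full_event["reconstructed_pt_gev"] > 25.0 and full_event["is_isolated"]

def simulate(n_crossings, rng):
    l1_passed = hlt_passed = 0
    for _ in range(n_crossings):
        # Partial information available at the first stage (toy model:
        # exponentially falling energy spectrum with a 5 GeV mean).
        partial = {"calorimeter_energy_gev": rng.expovariate(1 / 5.0)}
        if not level1_accept(partial):
            continue  # rejected: never read out from on-detector electronics
        l1_passed += 1
        # Full event, available only after read-out of the accepted crossing.
        full = {
            "reconstructed_pt_gev": partial["calorimeter_energy_gev"] * rng.uniform(0.8, 1.2),
            "is_isolated": rng.random() < 0.3,
        }
        if hlt_accept(full):
            hlt_passed += 1  # accepted: sent to permanent mass storage
    return l1_passed, hlt_passed

rng = random.Random(42)
l1, hlt = simulate(1_000_000, rng)
print(f"Level-1 accepted {l1} of 1000000 crossings; software stage stored {hlt}")
```

The point of the structure is the one the abstract makes: the expensive full read-out and software analysis only run on the small fraction of crossings that survive the cheap first-stage decision, which is how the rate drops from ~100 TB/s produced to ~1 GB/s stored.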
Record ID: cern-1751941
Institution: European Organization for Nuclear Research (CERN)
Record format: Invenio
Report number: ATL-DAQ-SLIDE-2014-546
OAI identifier: oai:cds.cern.ch:1751941
Subject: Particle Physics - Experiment
Date added: 2014-08-26