
The Evolution of the Trigger and Data Acquisition System in the ATLAS Experiment



Bibliographic Details
Main author: Garelli, N
Language: eng
Published: 2013
Subjects:
Online access: http://cds.cern.ch/record/1609564
Description
Summary: The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current first LHC long shutdown. The purpose of this upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates.

The TDAQ system used to date is organised in a three-level selection scheme, comprising a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on commodity hardware nodes. The second-level trigger operates over limited regions of the detector, the so-called Regions-of-Interest (RoIs). The third-level trigger instead deals with complete events.

While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions.

With higher luminosities, the required number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS, while keeping the total Level-1 rate at or below 100 kHz. The Central Trigger Processor will be upgraded to increase the number of manageable inputs and accommodate additional hardware for improved performance, and a new Topological Processor will be included in the slice. The latter will apply selections based either on geometrical information, such as angles between jets/leptons, or on even more complex observables, to further optimize the selection at this trigger stage.

Concerning the high-level trigger (HLT), the main step in the current plan is to deploy a single homogeneous system which merges the execution of the second and third trigger levels, still logically separated, on a single hardware node.
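The three-level cascade described above can be illustrated with a minimal sketch: a coarse Level-1 pass that emits Regions-of-Interest, a Level-2 pass that reads only those regions, and a Level-3 pass over the complete event. All names, thresholds, and the event layout here are illustrative assumptions, not the real TDAQ implementation.

```python
# Hypothetical sketch of a three-level trigger selection scheme.
# Thresholds and the "towers" event layout are invented for illustration.

def level1(event):
    # Coarse hardware-like decision: accept if any calorimeter tower
    # exceeds a threshold; also emit Regions-of-Interest (RoIs).
    rois = [i for i, et in enumerate(event["towers"]) if et > 20.0]
    return (len(rois) > 0, rois)

def level2(event, rois):
    # Level-2 reads out only the RoI regions, not the full detector.
    roi_energy = sum(event["towers"][i] for i in rois)
    return roi_energy > 50.0

def level3(event):
    # Level-3 (Event Filter) runs over the complete built event.
    return sum(event["towers"]) > 80.0

def select(event):
    # Chain the three levels; each stage reduces the event rate.
    passed, rois = level1(event)
    if not passed:
        return False
    if not level2(event, rois):
        return False
    return level3(event)

event = {"towers": [5.0, 30.0, 45.0, 10.0]}
print(select(event))  # True: towers 1 and 2 form RoIs and pass all levels
```

The key point of the RoI approach is in `level2`: it touches only the fragments flagged by Level-1, which is what keeps the second-level readout bandwidth and processing cost low.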
This design has many advantages, among them the radical simplification of the architecture, the flexible and automatically balanced distribution of the computing resources, and the sharing of code and services on nodes. Furthermore, handling the full HLT selection on a single node enables both further optimisations, e.g. the caching of event fragments already collected for RoI-based processing, and new approaches that better balance the selection steps before and after event building. Prototyping efforts have already demonstrated many of these benefits.

In this paper, we report on the design and development status of the upgraded trigger system, with particular attention to the tests currently ongoing to determine the achievable performance and to identify possible limitations.
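The fragment-caching benefit of the merged node can be sketched as follows: fragments fetched for RoI-based processing stay cached on the node, so full event building does not re-request them. The class name, the region-keyed readout, and the counting of fetches are all assumptions made for this sketch.

```python
# Hypothetical sketch of fragment caching on a merged HLT node:
# RoI-step fetches are reused when the full event is later built.

class MergedHLTNode:
    def __init__(self, readout):
        self.readout = readout   # detector region -> data fragment
        self.cache = {}          # fragments already fetched on this node
        self.fetches = 0         # number of requests to the readout system

    def fetch(self, region):
        # Request a fragment only if it is not already cached locally.
        if region not in self.cache:
            self.cache[region] = self.readout[region]
            self.fetches += 1
        return self.cache[region]

    def roi_processing(self, rois):
        # Former second-level step: only RoI fragments are requested.
        return [self.fetch(r) for r in rois]

    def build_full_event(self):
        # Former event-building step: cached RoI fragments are reused,
        # so only the remaining regions cost a new readout request.
        return {r: self.fetch(r) for r in self.readout}

readout = {"A": b"a", "B": b"b", "C": b"c", "D": b"d"}
node = MergedHLTNode(readout)
node.roi_processing(["B", "C"])  # 2 readout requests
node.build_full_event()          # only 2 more, not 4
print(node.fetches)  # 4
```

On two separate nodes the same event would cost six requests (two at Level-2, four at event building); co-locating the steps saves the duplicated RoI transfers.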