
Development of GPU-Accelerated Trigger Algorithms for the ATLAS Experiment at the LHC

Bibliographic Details
Main author: Dos Santos Fernandes, Nuno
Language: eng
Published: 2021
Subjects:
Online access: http://cds.cern.ch/record/2790668
Description
Summary: The ATLAS Experiment at the LHC, at CERN, is designed to detect several physical processes that occur when particles collide. Due to the high collision rate, which will increase even further with the High-Luminosity LHC upgrade, a trigger system must be employed to select the events that will be stored. Since this system operates in real time, there are significant constraints on the run time of the algorithms that comprise it. Thus, the feasibility of accelerating the execution of these algorithms should be considered, one possibility being the massive parallelism provided by Graphics Processing Units (GPUs), namely through the CUDA framework. The trigger system depends, among other information, on the reconstruction of the electromagnetic showers that form within the ATLAS detector's calorimeters, which is based on the energy deposited in each of these calorimeters' cells. The present work focuses on the algorithm known as Topological Clustering, which groups the cells according to the signal-to-noise ratio of the deposited energy, and in particular on a variant better suited to GPU programming, the Topo-Automaton Clustering algorithm. Several implementation strategies for the algorithm are compared in order to optimize its run time, and the reconstructed physical properties of the cell clusters are analysed to validate the final implementation. The results suggest an improvement of the run time by a factor between 3.5 and 5.5 on average (depending on the kind of event), though less than 20% of that time corresponds to the algorithm itself.
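
The cellular-automaton idea behind this kind of GPU clustering can be illustrated with a simple cluster-growing step: cells whose signal-to-noise ratio exceeds a seed threshold start their own cluster tag, and cells above a lower growth threshold repeatedly adopt the largest tag found among their neighbours until no tag changes, so that connected regions merge. The CUDA sketch below is purely illustrative and is not the thesis implementation; the array names, the thresholds, the CSR-style neighbour list and the one-dimensional toy geometry in main are all assumptions made for the example.

// Illustrative sketch only (not the thesis code): one tag-propagation step of a
// cellular-automaton clustering, iterated from the host until the tags stabilise.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void propagate_tags(const float* snr,       // |E| / noise per cell (assumed input)
                               const int*   neigh,     // flat neighbour indices (CSR-style, assumed)
                               const int*   neigh_off, // per-cell offsets into neigh
                               int*         tag,       // current cluster tag per cell (0 = unclustered)
                               int*         changed,   // flag: any tag updated this pass?
                               int          n_cells,
                               float        grow_thr)  // growth cut in units of noise (assumed value)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_cells || snr[i] < grow_thr) return;      // only cells above the growth cut grow

    int best = tag[i];
    for (int k = neigh_off[i]; k < neigh_off[i + 1]; ++k) {
        int t = tag[neigh[k]];
        if (t > best) best = t;                         // adopt the largest neighbouring tag
    }
    // Tags only ever increase and the host iterates to a fixed point, so
    // concurrent reads/writes of tag[] do not affect the final result.
    if (best != tag[i]) {
        tag[i] = best;
        atomicExch(changed, 1);                         // signal that another pass is needed
    }
}

int main() {
    // Toy 1D chain of 6 cells: cells 1 and 4 are seeds (S/N > 4), cell 3 is
    // below the growth cut and separates the two resulting clusters.
    const int n = 6;
    float h_snr[n]     = {2.5f, 5.0f, 2.5f, 0.5f, 6.0f, 2.5f};
    int   h_neigh[]    = {1, 0, 2, 1, 3, 2, 4, 3, 5, 4};
    int   h_off[n + 1] = {0, 1, 3, 5, 7, 9, 10};
    int   h_tag[n];
    for (int i = 0; i < n; ++i) h_tag[i] = (h_snr[i] > 4.0f) ? i + 1 : 0;  // seed tags

    float* d_snr;  int *d_neigh, *d_off, *d_tag, *d_changed;
    cudaMalloc(&d_snr, sizeof(h_snr));     cudaMemcpy(d_snr, h_snr, sizeof(h_snr), cudaMemcpyHostToDevice);
    cudaMalloc(&d_neigh, sizeof(h_neigh)); cudaMemcpy(d_neigh, h_neigh, sizeof(h_neigh), cudaMemcpyHostToDevice);
    cudaMalloc(&d_off, sizeof(h_off));     cudaMemcpy(d_off, h_off, sizeof(h_off), cudaMemcpyHostToDevice);
    cudaMalloc(&d_tag, sizeof(h_tag));     cudaMemcpy(d_tag, h_tag, sizeof(h_tag), cudaMemcpyHostToDevice);
    cudaMalloc(&d_changed, sizeof(int));

    int h_changed = 1;
    while (h_changed) {                                  // iterate until no tag changes
        h_changed = 0;
        cudaMemcpy(d_changed, &h_changed, sizeof(int), cudaMemcpyHostToDevice);
        propagate_tags<<<1, 64>>>(d_snr, d_neigh, d_off, d_tag, d_changed, n, 2.0f);
        cudaMemcpy(&h_changed, d_changed, sizeof(int), cudaMemcpyDeviceToHost);
    }

    cudaMemcpy(h_tag, d_tag, sizeof(h_tag), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("cell %d -> cluster tag %d\n", i, h_tag[i]);
    return 0;
}

In this hypothetical input, cells 0-2 end up with the tag seeded by cell 1 and cells 4-5 with the tag seeded by cell 4, while cell 3 stays unclustered; the thesis compares several such implementation strategies and thresholds, which this toy does not attempt to reproduce.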