Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC
Main author: | Laatu, Lauri Antti Olavi |
---|---|
Language: | eng |
Published: | 2023 |
Subjects: | Detectors and Experimental Techniques |
Online access: | http://cds.cern.ch/record/2875588 |
_version_ | 1780978901599649792 |
---|---|
author | Laatu, Lauri Antti Olavi |
author_facet | Laatu, Lauri Antti Olavi |
author_sort | Laatu, Lauri Antti Olavi |
collection | CERN |
description | The Standard Model of particle physics was completed with the discovery of the Higgs boson at the Large Hadron Collider (LHC) in 2012. Discovering new physics beyond the Standard Model and probing the newly discovered Higgs sector are two of the most important goals of current and future particle physics experiments. In 2026-2029, the LHC will undergo an upgrade to increase its instantaneous luminosity by a factor of 5-7 with respect to its design luminosity. This upgrade will mark the beginning of the High Luminosity LHC (HL-LHC) era. Concurrently, the ATLAS and CMS detectors will be upgraded to cope with the increased LHC luminosity. The ATLAS liquid argon (LAr) calorimeter measures the energies of particles produced in proton-proton collisions at the LHC. The LAr calorimeter readout electronics will be replaced to prepare it for the HL-LHC era, allowing it to run at a higher trigger rate and with increased granularity at the trigger level. The energy deposited in the LAr calorimeter is reconstructed from the electronic pulse signal using the optimal filtering algorithm. The energy is computed in real time on custom electronic boards based on Field Programmable Gate Arrays (FPGAs). FPGAs are chosen for their ability to process large amounts of data with low latency, which is a requirement of the ATLAS trigger system. The increased LHC luminosity will lead to a high rate of simultaneous proton-proton collisions (pileup), which results in a significant degradation of the resolution of the energy computed by the optimal filtering algorithm. Computing the energy with high precision is of utmost importance to achieve the physics goals of the ATLAS experiment at the HL-LHC. Recent advances in deep learning, coupled with the increased computing capacity of FPGAs, make deep learning algorithms promising tools to replace the existing optimal filtering algorithms. In this dissertation, recurrent neural networks (RNNs) are developed to compute the energy deposited in the LAr calorimeter. Long Short-Term Memory (LSTM) and simple RNNs are investigated. The parameters of these neural networks are studied in detail to optimize the performance. The developed networks are shown to outperform the optimal filtering algorithms. The models are further optimized for deployment on FPGAs by quantization and compression methods, which are shown to reduce the resource consumption with minimal effect on performance. The LAr calorimeter is composed of 182,000 individual channels for which the deposited energies must be computed. Training 182,000 different neural networks is not practically feasible. A new method based on unsupervised learning is developed to form clusters of channels with similar electronic pulse signals, which allows the use of the same neural network for all channels in one cluster. This method reduces the number of needed neural networks to about 100, making it possible to cover the full detector with these advanced algorithms. (Illustrative sketches of these techniques are given after the record below.) |
id | cern-2875588 |
institution | European Organization for Nuclear Research (CERN) |
language | eng |
publishDate | 2023 |
record_format | invenio |
spelling | cern-2875588 | 2023-10-17T18:55:31Z | http://cds.cern.ch/record/2875588 | eng | Laatu, Lauri Antti Olavi | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC | Detectors and Experimental Techniques | (abstract as in the description field above) | CERN-THESIS-2023-198 | oai:cds.cern.ch:2875588 | 2023-10-14T00:34:52Z |
spellingShingle | Detectors and Experimental Techniques Laatu, Lauri Antti Olavi Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title_full | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title_fullStr | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title_full_unstemmed | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title_short | Development of artificial intelligence algorithms adapted to big data processing in embedded (FPGAs) trigger and data acquisition systems at the LHC |
title_sort | development of artificial intelligence algorithms adapted to big data processing in embedded (fpgas) trigger and data acquisition systems at the lhc |
topic | Detectors and Experimental Techniques |
url | http://cds.cern.ch/record/2875588 |
work_keys_str_mv | AT laatulaurianttiolavi developmentofartificialintelligencealgorithmsadaptedtobigdataprocessinginembeddedfpgastriggeranddataacquisitionsystemsatthelhc |
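As background to the description above: the optimal filtering algorithm reconstructs the deposited energy as a weighted sum of the digitized pulse samples. A minimal sketch of the standard formulation, assuming $N$ samples $s_i$, a pedestal $p$, and optimal-filter coefficients $a_i$ chosen to minimize the noise and pileup contribution (the derivation of the coefficients is omitted here):

$$E = \sum_{i=1}^{N} a_i\,(s_i - p)$$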
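The RNN-based energy reconstruction described in the abstract can be illustrated with a minimal Keras sketch. This is not the network studied in the thesis: the window length, the number of recurrent units, and the single-layer LSTM layout are assumptions chosen only to show the idea of regressing one energy value from a short sequence of digitized calorimeter samples.

```python
# A minimal sketch, not the thesis network: sequence of pulse samples -> one energy.
import numpy as np
import tensorflow as tf

N_SAMPLES = 5   # assumption: number of digitized samples per reconstruction window
N_UNITS = 10    # assumption: size of the recurrent state

def build_energy_rnn() -> tf.keras.Model:
    """LSTM regressor mapping a short pulse sequence to a single energy value."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_SAMPLES, 1)),  # one ADC value per time step
        tf.keras.layers.LSTM(N_UNITS),                # recurrent summary of the pulse shape
        tf.keras.layers.Dense(1),                     # regressed energy
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    model = build_energy_rnn()
    # Toy data standing in for simulated pulse sequences and their true energies.
    x = np.random.rand(256, N_SAMPLES, 1).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    print(model.predict(x[:1], verbose=0))
```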
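Similarly, the quantization used to fit the models onto FPGAs can be sketched generically. The abstract only states that quantization and compression reduce resource consumption; the fixed-point bit widths below are assumptions, and this post-hoc rounding of weights is just one simple way to emulate the effect.

```python
# A minimal sketch of fixed-point weight quantization; bit widths are assumptions.
import numpy as np

def quantize_fixed_point(weights: np.ndarray, total_bits: int = 8, frac_bits: int = 6) -> np.ndarray:
    """Round weights to a signed fixed-point grid such as an FPGA implementation would use."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(weights * scale), qmin, qmax)
    return q / scale  # de-quantized values actually applied at inference

if __name__ == "__main__":
    w = np.random.normal(scale=0.5, size=(16, 16))
    wq = quantize_fixed_point(w)
    print("max quantization error:", float(np.max(np.abs(w - wq))))
```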
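Finally, the channel-clustering idea (grouping channels with similar electronic pulse shapes so that one network serves a whole cluster) can be sketched with a generic k-means clustering. The abstract does not name the clustering algorithm, so k-means, the pulse-shape representation, and the toy channel count here are assumptions; only the target of roughly 100 clusters comes from the text.

```python
# A minimal sketch: cluster channels by pulse shape so one network serves each cluster.
import numpy as np
from sklearn.cluster import KMeans

N_CHANNELS = 5_000   # toy subset; the real detector has about 182,000 channels
N_POINTS = 32        # assumption: samples used to describe one pulse shape
N_CLUSTERS = 100     # roughly 100 shared networks, as stated in the abstract

rng = np.random.default_rng(0)
# Toy stand-in for the normalized reference pulse shape of each channel.
pulse_shapes = rng.normal(size=(N_CHANNELS, N_POINTS))
pulse_shapes /= np.linalg.norm(pulse_shapes, axis=1, keepdims=True)

# Group channels whose pulse shapes are similar.
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0)
labels = kmeans.fit_predict(pulse_shapes)

# Each channel is then served by the network trained for its cluster.
print("channels assigned to cluster 0:", int(np.sum(labels == 0)))
```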