Multi-Particle Reconstruction with Dynamic Graph Neural Networks
Main author:
Language: eng
Published: 2023
Subjects:
Online access: http://cds.cern.ch/record/2863014
Summary: The task of finding the incident particles from the sensor deposits they leave on particle detectors is called event or particle reconstruction. The sensor deposits can be represented generically as a point cloud, with each point corresponding to three spatial dimensions of the sensor location, the energy deposit, and occasionally also the time of the deposit. As particle detectors become increasingly complex, ever more sophisticated methods are needed to perform particle reconstruction. An example is the ongoing High Luminosity (HL) upgrade of the Large Hadron Collider (HL-LHC). The HL-LHC is the most significant milestone in experimental particle physics and aims to deliver an order of magnitude higher data rate compared to the current LHC. As part of the upgrade, the endcap calorimeters of the Compact Muon Solenoid (CMS) experiment -- one of the two large general-purpose detectors at the LHC -- will be replaced by the radiation-hard High Granularity Calorimeter (HGCAL). The HGCAL will contain $\sim6$ million sensors to achieve the spatial resolution required for reconstructing individual particles in HL-LHC conditions. It has an irregular geometry due to its hexagonal sensors, whose sizes vary along the longitudinal and transverse axes. Further, it generates sparse data, as fewer than $10\%$ of the sensors register positive energy. Reconstruction in this environment, where particles leave highly irregular patterns of hits, is an unprecedentedly intractable and compute-intensive pattern recognition problem. This motivates the use of parallelisation-friendly deep learning approaches. More traditional deep learning methods, however, are not feasible for the HGCAL because those approaches assume a regular grid-like structure. In this thesis, a reconstruction algorithm based on a dynamic graph neural network called GravNet is presented.
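The abstract does not reproduce the layer equations, but the published GravNet operation can be sketched as follows: each hit is projected into a learned low-dimensional coordinate space, its $k$ nearest neighbours are found there, and neighbour features are exchanged with Gaussian distance weights. The weight matrices `w_s`, `w_flr` and the choice `k=4` below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

def gravnet_layer(h, w_s, w_flr, k=4):
    """Minimal GravNet-style aggregation (illustrative weights w_s, w_flr).

    h:      (N, F) input hit features
    w_s:    (F, S) projection into the learned coordinate space
    w_flr:  (F, P) projection into the features to be exchanged
    """
    s = h @ w_s            # learned coordinates per hit
    f = h @ w_flr          # features each hit offers its neighbours
    # pairwise squared distances in the learned space
    d2 = ((s[:, None, :] - s[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbours per hit
    w = np.exp(-d2[np.arange(len(h))[:, None], nbrs])  # Gaussian distance weights
    msgs = f[nbrs] * w[..., None]           # distance-weighted neighbour features
    # aggregate with mean and max, then concatenate with the input features
    agg = np.concatenate([msgs.mean(axis=1), msgs.max(axis=1)], axis=1)
    return np.concatenate([h, agg], axis=1)
```

The graph is "dynamic" because the neighbour lists are rebuilt from the learned coordinates at every layer, rather than fixed by the detector geometry.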
The network is paired with a segmentation technique, Object Condensation, to first perform point-cloud segmentation on the detector hits. The property-prediction capability of the Object Condensation approach is then used for energy regression of the reconstructed particles. A range of experiments is conducted to show that this method works well in the conditions expected in the HGCAL, i.e. with $200$ simultaneous proton-proton collisions. Parallel algorithms based on Nvidia CUDA are also presented to address the computational challenges of the graph neural network discussed in this thesis. With these optimisations, reconstruction can be performed by this method in approximately $2$ seconds, which is suitable given the computational constraints at the LHC. The presented method is the first example of deep-learning-based end-to-end calorimetric reconstruction in high-occupancy environments. This sets the stage for the next era of particle reconstruction, which is expected to be end-to-end. While this thesis focuses on the HGCAL, the method discussed is general and can be extended not only to other calorimeters but also to other tasks such as track reconstruction.
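The Object Condensation idea mentioned above can be illustrated with its clustering potential: each hit predicts a condensation strength $\beta$ and clustering coordinates; hits belonging to the same particle are attracted to that particle's highest-$\beta$ hit, while all other hits are pushed away. The sketch below follows the published formulation in spirit; the function name, `q_min`, and the exact hinge shape are illustrative, not taken from the thesis.

```python
import numpy as np

def condensation_potential_loss(beta, coords, obj_id, q_min=0.1):
    """Sketch of an Object-Condensation-style attractive/repulsive potential.

    beta:   (N,) predicted condensation strength in [0, 1)
    coords: (N, D) predicted clustering coordinates
    obj_id: (N,) integer particle index per hit (-1 for noise)
    """
    q = np.arctanh(beta) ** 2 + q_min       # "charge" per hit, grows with beta
    loss = 0.0
    for k in np.unique(obj_id):
        if k < 0:
            continue                        # noise hits feel no attraction
        mask = obj_id == k
        alpha = np.argmax(q * mask)         # highest-charge hit = condensation point
        d = np.linalg.norm(coords - coords[alpha], axis=1)
        # quadratic attraction for this particle's hits,
        # hinge repulsion for everything else
        v = np.where(mask, d ** 2, np.maximum(0.0, 1.0 - d))
        loss += (v * q).sum() * q[alpha]
    return loss / len(beta)
```

Minimising a potential of this shape pulls each particle's hits into a tight cluster around one representative hit, whose per-hit property predictions (e.g. the regressed energy) can then be read off for the reconstructed particle.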