Distributed training and scalability for the particle clustering method UCluster
In recent years, machine learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are used in everything from trigger systems to reconstruction to data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified for a variety of tasks. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, which extends its usefulness even further. UCluster combines the graph-based neural network ABCnet with a clustering step and is trained with a combined loss function. It was written in TensorFlow v1.14 and has previously been trained on a single GPU. It achieves a clustering accuracy of 81% when applied to multiclass classification of simulated jet events. Our implementation adds distributed training by means of the Horovod framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files to split the data between nodes, distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC datasets. We find that the model is well suited to distributed training, with training time decreasing in direct proportion to the number of GPUs used.
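The abstract names the full recipe: Horovod for the distributed training loop, TensorFlow v2 for the model code, and Parquet shards so that each node reads only its own slice of the data. As a rough illustration of that pattern only, here is a minimal sketch built on Horovod's public TensorFlow v2 API; the dense stand-in model, the loss, and the `data/events-*.parquet` shard layout are invented for the example and are not the paper's actual UCluster code.

```python
import glob

import horovod.tensorflow as hvd
import pandas as pd
import tensorflow as tf

hvd.init()  # one Horovod process per GPU

# Pin each process to a single local GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Each rank reads only its own subset of the Parquet shards, so the
# dataset can grow without any single node holding all of it.
# The shard layout and the "label" column are assumptions for this sketch.
shards = sorted(glob.glob("data/events-*.parquet"))
my_shards = shards[hvd.rank()::hvd.size()]
frames = pd.concat(pd.read_parquet(p) for p in my_shards)
features = frames.drop(columns="label").to_numpy("float32")
labels = frames["label"].to_numpy("int64")
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(10_000)
    .batch(256)
)

# Stand-in model and loss; the real method uses ABCnet plus a clustering
# step trained with a combined loss.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Scaling the learning rate with the worker count is a common Horovod heuristic.
opt = tf.keras.optimizers.Adam(1e-3 * hvd.size())

@tf.function
def train_step(x, y, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    # DistributedGradientTape averages the gradients across all ranks.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Start every rank from identical weights and optimizer state.
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(opt.variables(), root_rank=0)
    return loss

for step, (x, y) in enumerate(dataset):
    loss = train_step(x, y, step == 0)
```

Launched with, e.g., `horovodrun -np 4 python train.py`, each of the four processes pins one GPU, reads a disjoint subset of the shards, and averages gradients with the others at every step; this per-step synchronization is what lets the training time fall roughly in proportion to the number of GPUs.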
Main author: | Sunneborn Gudnadottir, Olga |
---|---|
Language: | eng |
Published: | 2021 |
Subjects: | Conferences |
Online access: | http://cds.cern.ch/record/2767279 |
_version_ | 1780971288689377280 |
---|---|
author | Sunneborn Gudnadottir, Olga |
author_facet | Sunneborn Gudnadottir, Olga |
author_sort | Sunneborn Gudnadottir, Olga |
collection | CERN |
description | In recent years, machine learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are used in everything from trigger systems to reconstruction to data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified for a variety of tasks. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, which extends its usefulness even further. UCluster combines the graph-based neural network ABCnet with a clustering step and is trained with a combined loss function. It was written in TensorFlow v1.14 and has previously been trained on a single GPU. It achieves a clustering accuracy of 81% when applied to multiclass classification of simulated jet events. Our implementation adds distributed training by means of the Horovod framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files to split the data between nodes, distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC datasets. We find that the model is well suited to distributed training, with training time decreasing in direct proportion to the number of GPUs used. |
id | cern-2767279 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2021 |
record_format | invenio |
spelling | cern-2767279 | 2022-11-02T22:25:36Z | http://cds.cern.ch/record/2767279 | eng | Sunneborn Gudnadottir, Olga | Distributed training and scalability for the particle clustering method UCluster | 25th International Conference on Computing in High Energy & Nuclear Physics | Conferences | In recent years, machine learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are used in everything from trigger systems to reconstruction to data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified for a variety of tasks. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, which extends its usefulness even further. UCluster combines the graph-based neural network ABCnet with a clustering step and is trained with a combined loss function. It was written in TensorFlow v1.14 and has previously been trained on a single GPU. It achieves a clustering accuracy of 81% when applied to multiclass classification of simulated jet events. Our implementation adds distributed training by means of the Horovod framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files to split the data between nodes, distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC datasets. We find that the model is well suited to distributed training, with training time decreasing in direct proportion to the number of GPUs used. | oai:cds.cern.ch:2767279 | 2021 |
spellingShingle | Conferences Sunneborn Gudnadottir, Olga Distributed training and scalability for the particle clustering method UCluster |
title | Distributed training and scalability for the particle clustering method UCluster |
title_full | Distributed training and scalability for the particle clustering method UCluster |
title_fullStr | Distributed training and scalability for the particle clustering method UCluster |
title_full_unstemmed | Distributed training and scalability for the particle clustering method UCluster |
title_short | Distributed training and scalability for the particle clustering method UCluster |
title_sort | distributed training and scalability for the particle clustering method ucluster |
topic | Conferences |
url | http://cds.cern.ch/record/2767279 |
work_keys_str_mv | AT sunneborngudnadottirolga distributedtrainingandscalabilityfortheparticleclusteringmethoducluster AT sunneborngudnadottirolga 25thinternationalconferenceoncomputinginhighenergynuclearphysics |