Hyperparameter optimization of data-driven AI models on HPC systems
Main Authors: | Wulff, Eric; Girone, Maria; Pata, Joosep |
---|---|
Language: | eng |
Published: | 2023 |
Subjects: | physics.data-an; cs.LG; Data Analysis and Statistics; Computing and Computers |
Online Access: | https://dx.doi.org/10.1088/1742-6596/2438/1/012092 http://cds.cern.ch/record/2871810 |
_version_ | 1780978567018971136 |
---|---|
author | Wulff, Eric; Girone, Maria; Pata, Joosep |
author_facet | Wulff, Eric; Girone, Maria; Pata, Joosep |
author_sort | Wulff, Eric |
collection | CERN |
description | In the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE), researchers develop novel, scalable AI technologies towards Exascale. This work exercises High Performance Computing resources to perform large-scale hyperparameter optimization using distributed training on multiple compute nodes. It is part of RAISE's work on data-driven use cases, which leverages AI- and HPC cross-methods developed within the project. In response to the demand for parallelizable and resource-efficient hyperparameter optimization methods, advanced hyperparameter search algorithms are benchmarked and compared. The evaluated algorithms, including Random Search, Hyperband, and ASHA, are tested and compared in terms of both accuracy and accuracy per compute resources spent. As an example use case, a graph neural network model known as MLPF, developed for the task of Machine-Learned Particle-Flow reconstruction in High Energy Physics, acts as the base model for optimization. Results show that hyperparameter optimization significantly increased the performance of MLPF and that this would not have been possible without access to large-scale High Performance Computing resources. It is also shown that, in the case of MLPF, the ASHA algorithm in combination with Bayesian optimization gives the largest performance increase per compute resources spent out of the investigated algorithms. |
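The Hyperband and ASHA algorithms named in the abstract are both built on successive halving: evaluate many hyperparameter configurations on a small training budget, keep only the best fraction, and give the survivors proportionally more budget. The following is a minimal, self-contained sketch of synchronous successive halving, not the paper's actual (asynchronous, distributed) implementation; the `toy_eval` objective and the `lr` hyperparameter are illustrative assumptions.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Each round: score every surviving config at the current budget,
    keep the top 1/eta fraction, then multiply the budget by eta."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = [(evaluate(cfg, budget), cfg) for cfg in survivors]
        scored.sort(key=lambda s: s[0], reverse=True)  # higher score = better
        keep = max(1, len(scored) // eta)
        survivors = [cfg for _, cfg in scored[:keep]]
        budget *= eta
    return survivors[0]

# Hypothetical objective: "accuracy" improves with training budget and
# peaks at lr = 0.1. A real evaluate() would train the model for `budget`
# epochs and return a validation metric.
def toy_eval(config, budget):
    return (1 - abs(config["lr"] - 0.1)) * (1 - 1 / (budget + 1))

grid = [{"lr": lr} for lr in (0.001, 0.01, 0.05, 0.1, 0.5, 1.0)]
best = successive_halving(grid, toy_eval, min_budget=1, eta=3, rounds=3)
```

ASHA's practical advantage over this synchronous sketch is that promotions happen asynchronously, so workers on an HPC cluster never idle waiting for a round to finish; the abstract's best-performing variant additionally replaces random config sampling with Bayesian optimization.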
id | cern-2871810 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2023 |
record_format | invenio |
spelling | cern-2871810; 2023-09-20T21:01:02Z; doi:10.1088/1742-6596/2438/1/012092; http://cds.cern.ch/record/2871810; eng; Wulff, Eric; Girone, Maria; Pata, Joosep; Hyperparameter optimization of data-driven AI models on HPC systems; physics.data-an; cs.LG; Data Analysis and Statistics; Computing and Computers; arXiv:2203.01112; oai:cds.cern.ch:2871810; 2023 |
spellingShingle | physics.data-an; cs.LG; Data Analysis and Statistics; Computing and Computers; Wulff, Eric; Girone, Maria; Pata, Joosep; Hyperparameter optimization of data-driven AI models on HPC systems |
title | Hyperparameter optimization of data-driven AI models on HPC systems |
title_full | Hyperparameter optimization of data-driven AI models on HPC systems |
title_fullStr | Hyperparameter optimization of data-driven AI models on HPC systems |
title_full_unstemmed | Hyperparameter optimization of data-driven AI models on HPC systems |
title_short | Hyperparameter optimization of data-driven AI models on HPC systems |
title_sort | hyperparameter optimization of data-driven ai models on hpc systems |
topic | physics.data-an; cs.LG; Data Analysis and Statistics; Computing and Computers |
url | https://dx.doi.org/10.1088/1742-6596/2438/1/012092 http://cds.cern.ch/record/2871810 |
work_keys_str_mv | AT wulfferic hyperparameteroptimizationofdatadrivenaimodelsonhpcsystems AT gironemaria hyperparameteroptimizationofdatadrivenaimodelsonhpcsystems AT patajoosep hyperparameteroptimizationofdatadrivenaimodelsonhpcsystems |