Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing
We study the performance of a cloud-based GPU-accelerated inference server to speed up event reconstruction in neutrino data batch jobs. Using detector data from the ProtoDUNE experiment and employing the standard DUNE grid job submission tools, we attempt to reprocess the data by running several thousand concurrent grid jobs, a rate we expect to be typical of current and future neutrino physics experiments. We process most of the dataset with the GPU version of our processing algorithm and the remainder with the CPU version for timing comparisons. We find that a 100-GPU cloud-based server is able to easily meet the processing demand, and that using the GPU version of the event processing algorithm is two times faster than processing these data with the CPU version when comparing to the newest CPUs in our sample. The amount of data transferred to the inference server during the GPU runs can overwhelm even the highest-bandwidth network switches, however, unless care is taken to observe network facility limits or otherwise distribute the jobs to multiple sites. We discuss the lessons learned from this processing campaign and several avenues for future improvements.
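The record itself contains no implementation details. Purely as an illustrative sketch of the inference-as-a-service pattern the abstract describes (CPU grid jobs offloading neural-network inference to a remote GPU server), the following assumes NVIDIA Triton Inference Server, a common choice for such deployments; the endpoint, model name, and tensor names and shapes are hypothetical:

```python
# Hypothetical sketch of the inference-as-a-service pattern described in the
# abstract: a CPU grid job offloads neural-network inference to a remote GPU
# server. Assumes NVIDIA Triton Inference Server; the endpoint, model name,
# and tensor names/shapes below are made up for illustration.
import numpy as np
import tritonclient.grpc as triton

SERVER_URL = "inference.example.org:8001"   # assumed cloud server endpoint
MODEL_NAME = "protodune_hit_classifier"     # hypothetical model name


def classify_patches(patches: np.ndarray) -> np.ndarray:
    """Send a batch of wire-plane image patches to the GPU server and
    return the per-patch classification scores."""
    client = triton.InferenceServerClient(url=SERVER_URL)

    # Describe the input tensor: a batch of 48x48 single-channel patches
    # (shape and dtype are assumptions for illustration).
    infer_input = triton.InferInput("input", list(patches.shape), "FP32")
    infer_input.set_data_from_numpy(patches.astype(np.float32))

    # Request only the output tensor we need, which keeps the response small;
    # the abstract notes that total transfer volume can saturate even
    # high-bandwidth network switches.
    output = triton.InferRequestedOutput("scores")

    result = client.infer(MODEL_NAME, inputs=[infer_input], outputs=[output])
    return result.as_numpy("scores")


if __name__ == "__main__":
    # Batch many patches per request: larger batches amortize round-trip
    # latency, which matters when thousands of grid jobs share one server.
    batch = np.random.rand(256, 48, 48, 1).astype(np.float32)
    scores = classify_patches(batch)
    print(scores.shape)
```

In this pattern the grid job stays on its CPU worker node and only the model inputs and outputs cross the network, which is why, as the abstract warns, aggregate traffic from thousands of concurrent jobs must be kept within network facility limits or spread across multiple sites.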
Main authors: | Cai, Tejin; Herner, Kenneth; Yang, Tingjun; Wang, Michael; Acosta Flechas, Maria; Harris, Philip; Holzman, Burt; Pedro, Kevin; Tran, Nhan |
---|---|
Format: | Online Article Text |
Language: | English |
Journal: | Comput Softw Big Sci (Research) |
Published: | Springer International Publishing, 2023-10-27 |
Collection: | PubMed (PMC10611601) |
License: | Creative Commons Attribution 4.0 International; U.S. Government work, not under copyright protection in the US |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611601/ https://www.ncbi.nlm.nih.gov/pubmed/37899771 http://dx.doi.org/10.1007/s41781-023-00101-0 |