
Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing

We study the performance of a cloud-based GPU-accelerated inference server to speed up event reconstruction in neutrino data batch jobs. Using detector data from the ProtoDUNE experiment and employing the standard DUNE grid job submission tools, we attempt to reprocess the data by running several th...


Bibliographic Details
Main Authors: Cai, Tejin; Herner, Kenneth; Yang, Tingjun; Wang, Michael; Acosta Flechas, Maria; Harris, Philip; Holzman, Burt; Pedro, Kevin; Tran, Nhan
Format: Online Article, Text
Language: English
Published: Springer International Publishing, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611601/
https://www.ncbi.nlm.nih.gov/pubmed/37899771
http://dx.doi.org/10.1007/s41781-023-00101-0