Reaching new peaks for the future of the CMS HTCondor Global Pool


Bibliographic Details
Main Authors: Perez-Calero Yzquierdo, Antonio Maria; Mascheroni, Marco; Acosta Flechas, Maria; Dost, Jeffrey Michael; Haleem, Saqib; Hurtado Anampa, Kenyi Paolo; Khan, Farrukh Aftab; Kizinevic, Edita; Peregonow, Nicholas
Language: eng
Published: 2021
Subjects: Detectors and Experimental Techniques; Computing and Computers
Online Access: https://dx.doi.org/10.1051/epjconf/202125102055
http://cds.cern.ch/record/2797501
author Perez-Calero Yzquierdo, Antonio Maria
Mascheroni, Marco
Acosta Flechas, Maria
Dost, Jeffrey Michael
Haleem, Saqib
Hurtado Anampa, Kenyi Paolo
Khan, Farrukh Aftab
Kizinevic, Edita
Peregonow, Nicholas
collection CERN
description The CMS experiment at CERN employs a distributed computing infrastructure to satisfy its data processing and simulation needs. The CMS Submission Infrastructure team manages a dynamic HTCondor pool, aggregating mainly Grid clusters worldwide, but also HPC, Cloud and opportunistic resources. This CMS Global Pool, which currently involves over 70 computing sites worldwide and peaks at 350k CPU cores, is employed to successfully manage the simultaneous execution of up to 150k tasks. While the present infrastructure is sufficient to harness the current computing power scales, the latest CMS estimates predict a noticeable expansion in the amount of CPU that will be required in order to cope with the massive data increase of the High-Luminosity LHC (HL-LHC) era, planned to start in 2027. This contribution presents the latest results of the CMS Submission Infrastructure team in exploring and expanding the scalability reach of our Global Pool, in order to preventively detect and overcome any barriers in relation to the HL-LHC goals, while maintaining high efficiency in our workload scheduling and resource utilization.
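The abstract describes an HTCondor pool in which up to 150k tasks are matched to slots advertised by heterogeneous resources. As a purely illustrative toy sketch of that matchmaking idea (this is not CMS or HTCondor code; the slot names and greedy policy are invented for the example), jobs requesting CPUs can be paired with slots advertising capacity like so:

```python
# Toy model of HTCondor-style matchmaking: jobs request CPUs,
# slots advertise capacity, and a "negotiator" pairs them greedily.
from dataclasses import dataclass

@dataclass
class Slot:
    name: str
    cpus: int  # CPU cores this slot advertises

@dataclass
class Job:
    id: int
    request_cpus: int  # cores the job asks for

def negotiate(jobs, slots):
    """Greedily match each job to the first slot with enough free CPUs.

    Returns a dict mapping job id -> slot name; unmatched jobs stay idle.
    """
    free = {s.name: s.cpus for s in slots}
    matches = {}
    for job in jobs:
        for name, cpus in free.items():
            if cpus >= job.request_cpus:
                free[name] -= job.request_cpus
                matches[job.id] = name
                break
    return matches

slots = [Slot("site-A", 8), Slot("site-B", 4)]
jobs = [Job(1, 4), Job(2, 4), Job(3, 4)]
print(negotiate(jobs, slots))  # {1: 'site-A', 2: 'site-A', 3: 'site-B'}
```

The real negotiator evaluates ClassAd requirement expressions and fair-share priorities rather than a greedy first-fit, but the core loop of matching job requests against advertised slot resources is the same shape.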
id cern-2797501
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2021
record_format invenio
spelling cern-2797501 2022-08-23T09:24:41Z
doi:10.1051/epjconf/202125102055
http://cds.cern.ch/record/2797501
eng
Perez-Calero Yzquierdo, Antonio Maria; Mascheroni, Marco; Acosta Flechas, Maria; Dost, Jeffrey Michael; Haleem, Saqib; Hurtado Anampa, Kenyi Paolo; Khan, Farrukh Aftab; Kizinevic, Edita; Peregonow, Nicholas
Reaching new peaks for the future of the CMS HTCondor Global Pool
Detectors and Experimental Techniques; Computing and Computers
CMS-CR-2021-023
oai:cds.cern.ch:2797501
2021-02-26
title Reaching new peaks for the future of the CMS HTCondor Global Pool
topic Detectors and Experimental Techniques
Computing and Computers
url https://dx.doi.org/10.1051/epjconf/202125102055
http://cds.cern.ch/record/2797501