
Optimization of the HLT resource consumption in the LHCb experiment

Bibliographic Details
Main Authors: Frank, M, Gaspar, C, van Herwijnen, E, Jost, B, Neufeld, N, Schwemmer, R
Language: eng
Published: 2012
Subjects:
Online Access: https://dx.doi.org/10.1088/1742-6596/396/1/012021
http://cds.cern.ch/record/1565932
Description
Summary: Today's computing elements for software-based high level trigger (HLT) processing are based on nodes with multiple cores. Using process-based parallelization to filter particle collisions from the LHCb experiment on such nodes leads to expensive memory consumption and hence a significant cost increase. In the following, an approach is presented that both minimizes the resource consumption of the filter applications and reduces their startup time. Described are the duplication of threads, the handling of files open in read-write mode when duplicating filter processes, and the possibility to bootstrap the event filter applications directly from preconfigured checkpoint files. This reduced the memory consumption in the nodes of the LHCb HLT farm by roughly 60% and improved the startup time by a factor of 10.
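
The memory saving described in the abstract rests on duplicating an already-configured filter process so that the workers share the parent's pages via copy-on-write. The following is only a minimal sketch of that general idea on a POSIX system, not the authors' implementation: configureApplication and runEventFilterLoop are hypothetical placeholders for the expensive HLT configuration step and the per-event filter loop, and the sketch omits the thread duplication, read-write file handling, and checkpoint-file bootstrapping that the paper actually addresses.

// Sketch only: fork N filter workers after a single expensive configuration
// step, so the configured memory pages are shared copy-on-write.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical stand-ins for the configuration step (geometry, conditions,
// trigger configuration) and the per-event filter loop of an HLT application.
void configureApplication() { /* load large, mostly read-only data */ }
void runEventFilterLoop(int workerId) {
    std::printf("worker %d (pid %ld) entering event loop\n",
                workerId, (long)getpid());
    /* read events, run the trigger selection, forward accepted events */
}

int main(int argc, char** argv) {
    const int nWorkers = (argc > 1) ? std::atoi(argv[1]) : 4;

    // Configure once, before forking, so all workers share these pages.
    configureApplication();

    std::vector<pid_t> children;
    for (int i = 0; i < nWorkers; ++i) {
        pid_t pid = fork();
        if (pid == 0) {          // child: becomes a filter worker
            runEventFilterLoop(i);
            _exit(0);
        } else if (pid > 0) {    // parent: keep track of the child
            children.push_back(pid);
        } else {
            std::perror("fork");
            return 1;
        }
    }

    // Parent waits for all workers to finish.
    for (pid_t pid : children) waitpid(pid, nullptr, 0);
    return 0;
}

Only the pages a worker subsequently writes to are copied, which is why per-node memory grows far more slowly than the number of filter processes; bootstrapping from a preconfigured checkpoint file additionally removes the configuration step from the startup path, which is where the factor-10 startup improvement comes from.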