Results on LHCb Data Challenge 06
Main author: | Santinelli, R |
---|---|
Language: | eng |
Published: | 2007 |
Subjects: | Detectors and Experimental Techniques; Computing and Computers |
Online access: | http://cds.cern.ch/record/1120791 |
_version_ | 1780914566077612032 |
---|---|
author | Santinelli, R |
collection | CERN |
description | The Large Hadron Collider (LHC) at CERN is the front-line machine for high-energy physics (HEP) and will start operating in 2007. The amount of data it will produce, and that must be analyzed, is unprecedented. LHCb, one of the large experiments at the LHC, has moved toward grid technologies to cope with these requirements. Integrating the experiment-specific computing framework into the underlying production grid has not always been effortless. Grid technologies are the only way to meet today's HEP computing needs, and the complexity of these new techniques made it necessary for each experiment to design a model for processing and analyzing its data. The 2006 data challenge, LHCb DC06, is the latest in a series of large-scale activities on the Grid and represents the final benchmark before real data taking. Its goal is to validate the LHCb computing model and computing framework, but it is also the last opportunity to exercise the whole simulation chain on WLCG resources and to test the readiness of all the resources involved. Over the past few years, LHCb has consistently been one of the top users of LCG-EGEE resources, gathering considerable experience in distributed computing at a large scale. The central part of the system is DIRAC (Distributed Infrastructure with Remote Agent Control). It is the experiment's gateway to the grid, and its keywords are resilience, reliability and redundancy. The maturity of the LHCb computing framework on one side, and the Grid expertise acquired by the community on the other, place the DC06 experience in a privileged position: it offers a lucid and objective view of the health of the Grid a few months before the first beam collisions. The aim of this work is to present this experience, its original objectives, and how these were adjusted over time to reflect the problems encountered.
A description of the DIRAC system and how it is evolving to cope with the limits of the back-end systems, a discussion of the performance achieved, and an analysis of the problems observed are also given. DC06 started in August 2006. Six months on, DC06 is understood as the set of all Monte Carlo production, reprocessing and analysis activities on WLCG. It has shown that the WLCG service is improving, but there is still considerable room for improvement: the Data Management and Workload Management systems on WLCG remain too unstable and inefficient. The reliability of these services can be improved by instituting an operational infrastructure that monitors problems and guarantees they are correctly addressed and fixed. Alternatively, LHCb believes that resource providers must be motivated to chase up issues themselves: on several occasions we triggered debugging sessions in close collaboration with site managers, which produced prompt and effective reactions. |
id | cern-1120791 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2007 |
record_format | invenio |
title | Results on LHCb Data Challenge 06 |
topic | Detectors and Experimental Techniques Computing and Computers |
url | http://cds.cern.ch/record/1120791 |
work_keys_str_mv | AT santinellir resultsonlhcbdatachallenge06 |
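The abstract describes DIRAC as the experiment's gateway to the grid, built around remote agents for resilience and redundancy. The pull-scheduling idea behind such a pilot-agent system can be sketched as follows (a minimal illustration only, not DIRAC code; the `TaskQueue` class, `run_agent` function and job names are hypothetical stand-ins for DIRAC's Workload Management System):

```python
import queue

class TaskQueue:
    """Hypothetical central task queue, standing in for the
    experiment-side workload management service."""

    def __init__(self):
        self._jobs = queue.Queue()

    def submit(self, job):
        # Production managers place work centrally; no site is targeted yet.
        self._jobs.put(job)

    def match(self):
        """Hand out the next waiting job, or None if the queue is empty."""
        try:
            return self._jobs.get_nowait()
        except queue.Empty:
            return None

def run_agent(task_queue, site_ok=True):
    """A pilot agent running at a grid site. Instead of the grid pushing
    jobs to sites, the agent *pulls* work only after confirming the local
    environment is healthy -- this late binding is what makes the scheme
    resilient to misconfigured or overloaded resources."""
    done = []
    while site_ok:
        job = task_queue.match()
        if job is None:  # queue drained: the agent exits cleanly
            break
        done.append(f"{job} done")
    return done

tq = TaskQueue()
for name in ["simulation", "reconstruction", "analysis"]:
    tq.submit(name)

print(run_agent(tq))  # ['simulation done', 'reconstruction done', 'analysis done']
```

The design point is that an unhealthy site (`site_ok=False`) simply pulls nothing, so work is never stranded on a broken resource; it stays in the central queue for the next healthy agent.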