CMS distributed analysis infrastructure and operations: experience with the first LHC data
The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite- and glidein-based workload management systems (WMS). We describe the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS yields a successful analysis workflow. We present the operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity-block selection via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.
Main Author: | Vaandering, Eric Wayne |
---|---|
Language: | eng |
Published: | 2010 |
Subjects: | Detectors and Experimental Techniques |
Online Access: | http://cds.cern.ch/record/1319362 |
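The abstract above mentions run and luminosity-block selection via the CMS Run Registry. As an illustration only, the sketch below assumes the JSON luminosity-mask layout that CMS certification exports use (a map from run number to certified lumi ranges); the helper names and the sample run/lumi numbers are hypothetical and do not reflect CRAB's actual API.

```python
import json

# Illustrative sketch only: assumes a CMS-style luminosity-mask JSON
# layout {"<run>": [[first_lumi, last_lumi], ...]}. Function names and
# the sample numbers below are hypothetical, not CRAB's interface.

def load_lumi_mask(path):
    """Read a run -> [[first, last], ...] certification mask from JSON."""
    with open(path) as handle:
        raw = json.load(handle)
    # JSON object keys are strings; use integer run numbers for lookups.
    return {int(run): ranges for run, ranges in raw.items()}

def passes_mask(mask, run, lumi):
    """True if the (run, luminosity block) pair lies in a certified range."""
    return any(first <= lumi <= last for first, last in mask.get(run, []))

if __name__ == "__main__":
    # Hypothetical mask standing in for a Run Registry export.
    mask = {132440: [[157, 378]], 132596: [[382, 453], [461, 469]]}
    print(passes_mask(mask, 132440, 200))  # True: 200 lies inside [157, 378]
    print(passes_mask(mask, 132596, 455))  # False: 455 falls between ranges
```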
_version_ | 1780921456791650304 |
---|---|
author | Vaandering, Eric Wayne |
author_facet | Vaandering, Eric Wayne |
author_sort | Vaandering, Eric Wayne |
collection | CERN |
description | The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite- and glidein-based workload management systems (WMS). We describe the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS yields a successful analysis workflow. We present the operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity-block selection via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined. |
id | cern-1319362 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2010 |
record_format | invenio |
spelling | cern-1319362; 2019-09-30T06:29:59Z; http://cds.cern.ch/record/1319362; eng; Vaandering, Eric Wayne; CMS distributed analysis infrastructure and operations: experience with the first LHC data; Detectors and Experimental Techniques; The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite- and glidein-based workload management systems (WMS). We describe the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS yields a successful analysis workflow. We present the operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity-block selection via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.; CMS-CR-2010-219; oai:cds.cern.ch:1319362; 2010-11-15 |
spellingShingle | Detectors and Experimental Techniques; Vaandering, Eric Wayne; CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title | CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title_full | CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title_fullStr | CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title_full_unstemmed | CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title_short | CMS distributed analysis infrastructure and operations: experience with the first LHC data |
title_sort | cms distributed analysis infrastructure and operations: experience with the first lhc data |
topic | Detectors and Experimental Techniques |
url | http://cds.cern.ch/record/1319362 |
work_keys_str_mv | AT vaanderingericwayne cmsdistributedanalysisinfrastructureandoperationsexperiencewiththefirstlhcdata |
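The abstract also refers to how analysis workflows varied during data taking; one recurring step in such workflows is partitioning the certified luminosity blocks into grid jobs. The sketch below, reusing the same hypothetical mask format as above, shows one simple way such a split could work; it is not CRAB's actual job-splitting algorithm.

```python
# Illustrative sketch, not CRAB's actual splitting logic: expand every
# certified (run, lumi) pair from a mask and chunk the list into jobs
# holding a fixed number of luminosity blocks each.

def split_by_lumis(mask, lumis_per_job):
    """Expand a run -> [[first, last], ...] mask and chunk it into jobs."""
    pairs = [(run, lumi)
             for run, ranges in sorted(mask.items())
             for first, last in ranges
             for lumi in range(first, last + 1)]
    return [pairs[i:i + lumis_per_job]
            for i in range(0, len(pairs), lumis_per_job)]

if __name__ == "__main__":
    # Hypothetical mask with seven certified luminosity blocks in total.
    mask = {132440: [[157, 160]], 132596: [[382, 384]]}
    for n, job in enumerate(split_by_lumis(mask, 3)):
        print(f"job {n}: {job}")
    # job 0: [(132440, 157), (132440, 158), (132440, 159)]
    # job 1: [(132440, 160), (132596, 382), (132596, 383)]
    # job 2: [(132596, 384)]
```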