CMS Distributed Data Analysis Challenges

Bibliographic Details
Main author: Grandi, C
Language: English
Published: 2004
Subjects: Detectors and Experimental Techniques
Online access: http://cds.cern.ch/record/787495

Description:
In Spring 2004, CMS will undertake a 100-terabyte-scale Data Challenge (DC04) as part of a series of challenges in preparation for running at CERN's Large Hadron Collider. Over one month, DC04 must demonstrate the ability of the computing and software to cope with a sustained event data-taking rate of 25 Hz, for a total of 50 million events. The emphasis of DC04 is on the validation of the first-pass reconstruction and storage systems at CERN and the streaming of events to a distributed system of Tier-1 and Tier-2 sites worldwide, where typical analysis tasks will be performed. It is expected that the LHC Computing Grid project will provide a set of grid services suitable for use in a real production environment as part of this data challenge. The results of this challenge will be used to define the CMS software and computing systems in their Technical Design Report.
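
As a quick back-of-envelope check of the figures quoted in the abstract (a Python sketch, not part of the record; decimal terabytes and the per-event size are assumptions derived from the stated totals):

# Sanity check of the DC04 throughput figures quoted in the abstract.
RATE_HZ = 25                # sustained event data-taking rate
TOTAL_EVENTS = 50_000_000   # 50 million events
TOTAL_BYTES = 100e12        # 100 TB scale, decimal terabytes assumed

seconds = TOTAL_EVENTS / RATE_HZ                  # 2,000,000 s
days = seconds / 86_400                           # ~23.1 days, consistent with the 1-month window
mb_per_event = TOTAL_BYTES / TOTAL_EVENTS / 1e6   # ~2 MB per event (implied, not stated)

print(f"Running time at 25 Hz: {days:.1f} days")
print(f"Implied event size: {mb_per_event:.1f} MB")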

Record ID: cern-787495 (oai:cds.cern.ch:787495)
Report number: CMS-CR-2004-008
Collection: CERN
Institution: European Organization for Nuclear Research (CERN)
Date: 2004-03-08