Handling of time-critical Conditions Data in the CMS experiment - Experience of the first year of data taking


Bibliographic Details
Main author: Govi, Giacomo
Language: eng
Published: 2012
Subjects: Conferences
Online access: http://cds.cern.ch/record/1460686
author Govi, Giacomo
collection CERN
description Data management for a wide category of non-event data plays a critical role in the operation of the CMS experiment. The processing chain (data taking, reconstruction, analysis) relies on the prompt availability of specific, time-dependent data describing the state of the various detectors and their calibration parameters, which are treated separately from event data. The Condition Database system is the infrastructure established to handle these data and to ensure that they are available to both offline and online workflows. The Condition Data layout is designed so that the payload data (the Condition) is associated with an Interval Of Validity (IOV). The IOV allows selective access to the sets corresponding to specific intervals of time, run number or luminosity section. Both payloads and IOVs are stored in a cluster of relational database servers (Oracle) using an object-relational access approach. The strict security and isolation requirements of the CMS online systems impose a redundant architecture on the database system. The master database is located in the experiment area within the online network, while a read-only replica in the CERN computing centre is kept in sync via Oracle streaming; this replica is the one accessible to worldwide computing jobs. The synchronization of the condition data is performed with specific jobs deployed within the online networks, and with dedicated “drop-box” services. We will discuss the overall architecture of the system, the implementation choices and the experience gained in the first year of operation.
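The abstract's central data model — a payload valid from some point (run number, time, or luminosity section) until the next payload's start — can be illustrated with a minimal sketch. This is not the actual CMS Condition Database implementation; the class and field names (`IOVSequence`, `since`, `payload_for`) are hypothetical, chosen only to show how an IOV lookup selects the payload covering a given run:

```python
import bisect


class IOVSequence:
    """Toy illustration of an Interval-Of-Validity sequence: each payload
    is valid from its 'since' run number until the next payload's 'since'."""

    def __init__(self):
        self._since = []     # sorted validity start points (run numbers)
        self._payloads = []  # payload valid from the matching 'since'

    def append(self, since, payload):
        # IOVs are appended in increasing order of validity start.
        if self._since and since <= self._since[-1]:
            raise ValueError("IOVs must be appended in increasing 'since' order")
        self._since.append(since)
        self._payloads.append(payload)

    def payload_for(self, run):
        """Return the payload whose validity interval covers the given run."""
        # Find the last 'since' that is <= run.
        i = bisect.bisect_right(self._since, run) - 1
        if i < 0:
            raise KeyError(f"no condition valid for run {run}")
        return self._payloads[i]


# Usage: two calibration payloads; run 200 falls in the second interval.
iov = IOVSequence()
iov.append(1, {"pedestal": 2.10})
iov.append(150, {"pedestal": 2.34})
print(iov.payload_for(200))  # {'pedestal': 2.34}
```

The same lookup pattern applies whether the validity axis is run number, absolute time, or luminosity section, which is why a single IOV abstraction can serve all three, as the abstract describes.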
id cern-1460686
institution European Organization for Nuclear Research
language eng
publishDate 2012
record_format invenio
spelling cern-1460686 | 2022-11-02T22:23:33Z | http://cds.cern.ch/record/1460686 | eng | Govi, Giacomo | Handling of time-critical Conditions Data in the CMS experiment - Experience of the first year of data taking | Computing in High Energy and Nuclear Physics (CHEP) 2012 | Conferences | oai:cds.cern.ch:1460686 | 2012
title Handling of time-critical Conditions Data in the CMS experiment - Experience of the first year of data taking
topic Conferences
url http://cds.cern.ch/record/1460686