LHCb computing model
This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may be located at the physicist's home lab, at national computer centres (Regional Centres), or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s, or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assumed that each user analysis will read 10% of this data twice a week, giving a sustained bandwidth of ~360 MB/s. The large amount of analysis data to be distributed will require huge WAN bandwidth if the data are distributed over the network, and the ability to handle and cache the data where the CPU power is installed; it is unlikely that each institute could support access to the full dataset. Current understanding suggests a solution where data and CPU resources are located at a few regional centres. Data for replication will be moved on tertiary media, reducing WAN cost.
Main authors: | Frank, M ; Pacheco, A |
---|---|
Language: | eng |
Published: | 1998 |
Subjects: | Detectors and Experimental Techniques |
Online access: | http://cds.cern.ch/record/691494 |
_version_ | 1780902002657591296 |
---|---|
author | Frank, M Pacheco, A |
author_facet | Frank, M Pacheco, A |
author_sort | Frank, M |
collection | CERN |
description | This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may be located at the physicist's home lab, at national computer centres (Regional Centres), or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s, or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assumed that each user analysis will read 10% of this data twice a week, giving a sustained bandwidth of ~360 MB/s. The large amount of analysis data to be distributed will require huge WAN bandwidth if the data are distributed over the network, and the ability to handle and cache the data where the CPU power is installed; it is unlikely that each institute could support access to the full dataset. Current understanding suggests a solution where data and CPU resources are located at a few regional centres. Data for replication will be moved on tertiary media, reducing WAN cost. |
id | cern-691494 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 1998 |
record_format | invenio |
spelling | cern-691494 2019-09-30T06:29:59Z http://cds.cern.ch/record/691494 eng Frank, M Pacheco, A LHCb computing model Detectors and Experimental Techniques This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may be located at the physicist's home lab, at national computer centres (Regional Centres), or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s, or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assumed that each user analysis will read 10% of this data twice a week, giving a sustained bandwidth of ~360 MB/s. The large amount of analysis data to be distributed will require huge WAN bandwidth if the data are distributed over the network, and the ability to handle and cache the data where the CPU power is installed; it is unlikely that each institute could support access to the full dataset. Current understanding suggests a solution where data and CPU resources are located at a few regional centres. Data for replication will be moved on tertiary media, reducing WAN cost. LHCb-98-046 oai:cds.cern.ch:691494 1998-03-11 |
spellingShingle | Detectors and Experimental Techniques Frank, M Pacheco, A LHCb computing model |
title | LHCb computing model |
title_full | LHCb computing model |
title_fullStr | LHCb computing model |
title_full_unstemmed | LHCb computing model |
title_short | LHCb computing model |
title_sort | lhcb computing model |
topic | Detectors and Experimental Techniques |
url | http://cds.cern.ch/record/691494 |
work_keys_str_mv | AT frankm lhcbcomputingmodel AT pachecoa lhcbcomputingmodel |
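The bandwidth figures quoted in the abstract can be cross-checked with a short calculation. The sketch below is a minimal Python check, assuming decimal units (1 MB = 10^6 bytes, 1 TB = 10^12 bytes) and a 30-day month; the constant names are illustrative and not taken from the record itself.

```python
# Cross-check of the abstract's analysis-data arithmetic
# (assumptions: 1 MB = 1e6 bytes, 1 TB = 1e12 bytes, 30-day month).

GROUP_RATE_MB_S = 4.2          # group analysis data production rate (MB/s)
SECONDS_PER_MONTH = 30 * 24 * 3600
SECONDS_PER_WEEK = 7 * 24 * 3600
N_ANALYSES = 100               # concurrent user analyses
READ_FRACTION = 0.10           # each analysis reads 10% of the dataset
READS_PER_WEEK = 2
DATASET_TB = 11.0              # one month of group analysis data (TB)

# 4.2 MB/s over a 30-day month -> ~10.9 TB, matching the quoted 11 TB/month
monthly_tb = GROUP_RATE_MB_S * 1e6 * SECONDS_PER_MONTH / 1e12

# 100 analyses x 2 reads/week x 10% of 11 TB -> ~364 MB/s sustained,
# matching the quoted ~360 MB/s
sustained_mb_s = (N_ANALYSES * READS_PER_WEEK * READ_FRACTION
                  * DATASET_TB * 1e12) / SECONDS_PER_WEEK / 1e6

print(round(monthly_tb, 1), round(sustained_mb_s))  # prints: 10.9 364
```

Both quoted numbers are thus self-consistent to within rounding, which supports reading the garbled "10 566130f this data" in the extracted abstract as "10% of this data".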