File management for HEP data grids
Main author:
Language: eng
Published: Glasgow U., 2006
Subjects:
Online access: http://cds.cern.ch/record/1351842
Summary: The next generation of high energy physics experiments, such as the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, poses a challenge to current data handling methodologies, in which data tends to be centralised in a single location. Data grids, including the LHC Computing Grid (LCG), are being developed to meet this challenge by unifying computing and storage resources from many sites worldwide and distributing data and computing tasks among them. This thesis describes the data management components of LCG and evaluates the performance of the LCG File Catalogue, showing it to be performant and scalable enough to meet the experiments' needs. File replication can improve a grid's performance by placing copies of data at strategic locations around the grid. Dynamic file replication, where replicas are created and deleted automatically according to some strategy, may be especially useful, and so the grid simulator OptorSim was developed to investigate different replication strategies. Simulation of several grid scenarios, including LCG, shows that relatively simple replication strategies can lead to significant reductions in data access times and improved usage of grid resources, while a more complex economic model may be useful in the future.
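To make the idea of a dynamic replication strategy concrete, here is a minimal sketch of one of the simpler strategies a simulator like OptorSim can model: replicate a file to the local site whenever it is accessed, and evict the least-recently-used replica when local storage fills up. This is an illustrative assumption, not code from the thesis or from OptorSim itself; the `Site` class, its parameters, and the file names are all hypothetical.

```python
# Sketch of a replicate-on-access strategy with LRU eviction.
# All names and sizes here are illustrative, not taken from the thesis.

from collections import OrderedDict

class Site:
    def __init__(self, capacity_gb: float):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        # OrderedDict keeps access order, so the oldest entry is evicted first.
        self.replicas: "OrderedDict[str, float]" = OrderedDict()

    def access(self, lfn: str, size_gb: float) -> str:
        """Serve a file access, creating a local replica if needed."""
        if lfn in self.replicas:
            self.replicas.move_to_end(lfn)  # mark as recently used
            return "local hit"
        # Remote file: make room by evicting least-recently-used replicas.
        while self.used_gb + size_gb > self.capacity_gb and self.replicas:
            _, old_size = self.replicas.popitem(last=False)
            self.used_gb -= old_size
        if self.used_gb + size_gb <= self.capacity_gb:
            self.replicas[lfn] = size_gb
            self.used_gb += size_gb
            return "replicated"
        return "remote read only"  # file too large to cache locally

site = Site(capacity_gb=10.0)
for lfn in ["/grid/data/a", "/grid/data/b", "/grid/data/a"]:
    print(lfn, "->", site.access(lfn, size_gb=4.0))
```

Repeated accesses to the same file become local hits after the first replication, which is the mechanism behind the reduced data access times the summary describes; the thesis also evaluates a more complex economic model beyond simple strategies like this one.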