
CASTOR status and evolution


Bibliographic Details
Main authors: Baud, Jean-Philippe; Couturier, Ben; Curran, Charles; Durand, Jean-Damien; Knezo, Emil; Occhetti, Stefano; Barring, Olof
Language: English
Published: 2003
Subjects: Computing and Computers
Online access: http://cds.cern.ch/record/618640
collection CERN
description In January 1999, CERN began to develop CASTOR ("CERN Advanced STORage manager"). This hierarchical storage manager, targeted at HEP applications, has been in full production at CERN since May 2001. It now holds more than two petabytes of data in roughly 9 million files. In 2002, 350 terabytes of data were stored for COMPASS at 45 MB/s, and a Data Challenge run for ALICE in preparation for the LHC startup in 2007 sustained a data transfer rate to tape of 300 MB/s for one week (180 TB). The major functionality improvements were support for files larger than 2 GB (in collaboration with IN2P3) and the development of Grid interfaces to CASTOR: GridFTP and SRM ("Storage Resource Manager"). An ongoing effort is under way to copy existing data from obsolete media such as 9940A tapes to more cost-effective offerings. CASTOR has also been deployed at several HEP sites with little effort. In 2003, we plan to continue work on Grid interfaces and to improve performance, not only for Central Data Recording but also for data analysis applications where thousands of processes may access the same hot data. This could imply the selection of another filesystem or the use of replication (hardware or software).
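The transfer figures quoted in the description are internally consistent, as a quick back-of-the-envelope check shows (a sketch only; it assumes the decimal unit convention 1 TB = 10^6 MB commonly used for tape storage):

```python
# Sanity-check the throughput figures quoted in the abstract.
# Assumes decimal units: 1 TB = 1_000_000 MB.

SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800 s
SECONDS_PER_DAY = 24 * 3600       # 86400 s

def tb_transferred(rate_mb_s: float, seconds: float) -> float:
    """Total data moved, in terabytes, at a sustained rate in MB/s."""
    return rate_mb_s * seconds / 1_000_000

# ALICE Data Challenge: 300 MB/s sustained for one week.
alice_tb = tb_transferred(300, SECONDS_PER_WEEK)
print(f"ALICE, one week at 300 MB/s: {alice_tb:.0f} TB")  # prints 181 TB, matching the quoted ~180 TB

# COMPASS: 350 TB at 45 MB/s implies roughly 90 days of sustained writing.
compass_days = 350_000_000 / 45 / SECONDS_PER_DAY
print(f"COMPASS, 350 TB at 45 MB/s: {compass_days:.0f} days")  # prints 90 days
```

So the "180 TB in one week" figure corresponds almost exactly to the stated 300 MB/s sustained rate, and the COMPASS total is consistent with a few months of continuous data taking.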
id cern-618640
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2003
record_format invenio
Report number: CHEP-2003-TUDT007
arXiv e-print: cs/0305047
OAI identifier: oai:cds.cern.ch:618640
Record dates: 2003-05-28; last modified 2023-03-15T19:11:00Z
title CASTOR status and evolution
topic Computing and Computers
url http://cds.cern.ch/record/618640