ATLAS Data Carousel

Bibliographic Details
Main Authors: Barisits, Martin-Stefan, Borodin, Mikhail, Di Girolamo, Alessandro, Elmsheuser, Johannes, Golubkov, Dmitry, Klimentov, Alexei, Lassnig, Mario, Maeno, Tadashi, Walker, Rodney, Zhao, Xin
Language: eng
Published: 2020
Subjects:
Online Access: https://dx.doi.org/10.1051/epjconf/202024504035
http://cds.cern.ch/record/2709950
Description
Summary: The ATLAS experiment at CERN’s Large Hadron Collider (LHC) stores detector and simulation data in raw and derived data formats across more than 150 Grid sites worldwide, currently totalling about 200 PB on disk and 250 PB on tape. Data have different access characteristics due to the various computational workflows and can be accessed through different media, such as remote I/O and disk caches on hard disk drives or SSDs, while the larger data centres provide the majority of offline storage capability via tape systems. For the next LHC phase, the High Luminosity LHC (HL-LHC), the estimated data storage requirements are several times larger than the present forecast of available resources, based on a flat-budget assumption. On the computing side, ATLAS Distributed Computing (ADC) has been very successful in recent years with HPC and HTC integration and with the use of opportunistic computing resources for Monte Carlo production. Equivalent opportunistic storage, however, does not exist for HEP experiments. ADC therefore started the “Data Carousel” project to increase the usage of less expensive storage, i.e. tape or even commercial storage, so it is not limited exclusively to tape technologies. Data Carousel orchestrates data processing between workload management, data management, and storage services, with the bulk data resident on offline storage. Processing is executed by staging a sliding window of inputs onto faster buffer storage and processing it promptly, such that only a small percentage of the input data is available at any one time. With this project, we aim to demonstrate that this is a natural way to dramatically reduce storage costs. The first phase of the project started in the fall of 2018 and was devoted to I/O tests of the sites’ archival systems. We are now in Phase II, which requires tight integration of the workload and data management systems. Additionally, the Data Carousel will study the feasibility of running multiple computing workflows from tape. The project is progressing well; the results are presented in this paper and will be used before LHC Run 3.
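
The sliding-window staging described in the summary can be made concrete with a short sketch. The following Python snippet is a minimal illustration under assumed names, not ATLAS production code: stage_from_tape, process, release_from_buffer, and WINDOW_SIZE are hypothetical placeholders for the real workload and data management interactions (e.g. PanDA and Rucio), and the sequential loop stands in for what is in practice an asynchronous pipeline. It shows how a bounded buffer lets a workflow consume an arbitrarily large tape-resident dataset while only a small window of files ever occupies fast disk.

    from collections import deque
    from typing import Iterable

    WINDOW_SIZE = 10  # assumed buffer capacity, in files


    def stage_from_tape(name: str) -> str:
        """Hypothetical stand-in for a tape recall to the disk buffer."""
        print(f"staging    {name}")
        return name


    def process(name: str) -> None:
        """Hypothetical stand-in for running the payload on a staged file."""
        print(f"processing {name}")


    def release_from_buffer(name: str) -> None:
        """Hypothetical stand-in for freeing the buffer copy after use."""
        print(f"releasing  {name}")


    def carousel(inputs: Iterable[str]) -> None:
        """Keep at most WINDOW_SIZE inputs on fast buffer storage at a time."""
        window: deque = deque()
        it = iter(inputs)
        # Prefill the buffer up to its capacity.
        for name in it:
            window.append(stage_from_tape(name))
            if len(window) == WINDOW_SIZE:
                break
        # Slide the window: process the oldest staged file, free its
        # buffer slot, and stage the next input in its place.
        for name in it:
            done = window.popleft()
            process(done)
            release_from_buffer(done)
            window.append(stage_from_tape(name))
        # Drain the files still staged once the inputs are exhausted.
        while window:
            done = window.popleft()
            process(done)
            release_from_buffer(done)


    if __name__ == "__main__":
        carousel(f"file_{i:04d}.root" for i in range(25))

In this toy version the buffer never holds more than WINDOW_SIZE files, so disk usage stays constant regardless of dataset size; the real system additionally overlaps staging with processing so the payload is not blocked waiting on tape recalls.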