Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox

Bibliographic Details
Main Author: Valverde Cameselle, Roberto
Language: eng
Published: 2022
Subjects: HEP Computing
Online Access: http://cds.cern.ch/record/2801686
_version_ 1780972715940773888
author Valverde Cameselle, Roberto
author_facet Valverde Cameselle, Roberto
author_sort Valverde Cameselle, Roberto
collection CERN
description The CERNBox service is currently backed by 13 PB of EOS storage distributed across more than 3,000 drives. EOS has proven to be a reliable and high-performing backend throughout. The CERN Storage Group also operates CephFS, which has previously been evaluated in combination with EOS as a potential solution for large-scale physics data taking [1]. This work further explores the operational benefits of a combined EOS/CephFS solution as a CERNBox backend. First, we present the functional validation work done using a canary instance and existing micro-benchmarks. Next, we show how the solution was gradually introduced to production, observing the relative impacts of metadata and backend storage on user-perceived small-operation performance. Finally, the qualitative impact of the solution is discussed: the potential for enhanced QoS (e.g. policy-driven low-latency vs. low-cost areas), the simplification of hardware operations across the entire lifecycle, and how the work may enable future cloud-based deployments. [1] https://doi.org/10.1007/s41781-021-00071-1
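The description above mentions functional validation with existing micro-benchmarks and the relative impact of metadata and backend storage on user-perceived small-operation performance. The sketch below is purely illustrative and is not the benchmark referenced in this record: a minimal Python script that times small create/stat/unlink operations under a mounted path such as a CephFS or EOS FUSE mount. The target directory, iteration count, and function names are assumptions made for the example.

    import os
    import statistics
    import tempfile
    import time

    # Hypothetical target: a directory on the filesystem under test
    # (e.g. a CephFS or EOS FUSE mount). Defaults to the system temp dir.
    TARGET_DIR = os.environ.get("SMALLOP_DIR", tempfile.gettempdir())
    ITERATIONS = 1000  # assumed sample size, purely for illustration


    def time_small_ops(target_dir, iterations):
        """Collect per-operation latencies (seconds) for create, stat and
        unlink of tiny files; such metadata-heavy small operations dominate
        sync-and-share workloads like CERNBox."""
        samples = {"create": [], "stat": [], "unlink": []}
        for i in range(iterations):
            path = os.path.join(target_dir, "smallop-%d.tmp" % i)

            t0 = time.perf_counter()
            with open(path, "wb") as f:
                f.write(b"x")  # 1-byte payload: metadata cost dominates
            samples["create"].append(time.perf_counter() - t0)

            t0 = time.perf_counter()
            os.stat(path)
            samples["stat"].append(time.perf_counter() - t0)

            t0 = time.perf_counter()
            os.unlink(path)
            samples["unlink"].append(time.perf_counter() - t0)
        return samples


    def report(samples):
        """Print median and 99th-percentile latency per operation type."""
        for op, values in samples.items():
            values.sort()
            p50 = statistics.median(values)
            p99 = values[int(0.99 * (len(values) - 1))]
            print("%6s: p50=%.2f ms  p99=%.2f ms" % (op, p50 * 1e3, p99 * 1e3))


    if __name__ == "__main__":
        report(time_small_ops(TARGET_DIR, ITERATIONS))

Running the same script against a node-local disk and against a CephFS-backed path gives only a rough, order-of-magnitude comparison of small-operation latency; it is not a substitute for the validation work described in the record.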
id cern-2801686
institution European Organization for Nuclear Research
language eng
publishDate 2022
record_format invenio
spelling cern-2801686 | 2022-11-02T22:04:03Z | http://cds.cern.ch/record/2801686 | eng | Valverde Cameselle, Roberto | Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox | CS3 2022 - Cloud Storage Synchronization and Sharing | HEP Computing | oai:cds.cern.ch:2801686 | 2022
spellingShingle HEP Computing
Valverde Cameselle, Roberto
Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title_full Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title_fullStr Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title_full_unstemmed Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title_short Converging Storage Layers with Virtual CephFS Drives for EOS/CERNBox
title_sort converging storage layers with virtual cephfs drives for eos/cernbox
topic HEP Computing
url http://cds.cern.ch/record/2801686
work_keys_str_mv AT valverdecameselleroberto convergingstoragelayerswithvirtualcephfsdrivesforeoscernbox
AT valverdecameselleroberto cs32022cloudstoragesynchronizationandsharing