EOS and Ceph integration with Kubernetes
Main author:
Language: eng
Published: 2022
Subjects:
Online access: http://cds.cern.ch/record/2803558
Summary: Due to the increasing interest in data management services capable of coping with very large data volumes, enabling future e-infrastructures to address the needs of the next generation of extreme-scale scientific experiments, the INFN (Italian Institute for Nuclear Physics) national center dedicated to Research and Development on Information and Communication Technologies (CNAF) and the Conseil Européen pour la Recherche Nucléaire (CERN) combined their experience in storage systems to evaluate and test different technologies for next-generation storage challenges.
The activity focused on integrating different storage systems (EOS and Ceph), using Kubernetes as the orchestrator, with the aim of combining the high scalability and stability of EOS services with the reliability and redundancy features provided by Ceph.
In particular, EOS services were deployed as containers and orchestrated by Kubernetes, the well-known open-source container-orchestration system for automating application deployment, scaling and management.
The integration of the two storage solutions rests on deploying both as containers managed by Kubernetes. In this respect, Kubernetes was used to test different cluster-deployment scenarios (both on cloud and on bare metal) and to assess their performance, bringing important improvements in system operations, management and scalability.
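As a rough illustration of the deployment pattern described above, the sketch below runs an EOS head-node (MGM) service as a Kubernetes StatefulSet whose persistent volume is drawn from a Ceph-backed storage class. The image name, the storage-class name and the mount path are illustrative assumptions, not the actual configuration used in this activity; port 1094 is the standard XRootD port that EOS services listen on.

```yaml
# Minimal sketch: an EOS MGM service under Kubernetes with Ceph-backed storage.
# Image, storage class and paths are hypothetical placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eos-mgm
spec:
  serviceName: eos-mgm
  replicas: 1
  selector:
    matchLabels:
      app: eos-mgm
  template:
    metadata:
      labels:
        app: eos-mgm
    spec:
      containers:
        - name: mgm
          image: example.registry/eos-mgm:latest   # hypothetical image name
          ports:
            - containerPort: 1094                  # XRootD port used by EOS
          volumeMounts:
            - name: namespace-data
              mountPath: /var/eos                  # assumed data path
  volumeClaimTemplates:
    - metadata:
        name: namespace-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ceph-rbd                 # assumes a Ceph-backed class exists
        resources:
          requests:
            storage: 10Gi
```

Running the service as a StatefulSet (rather than a plain Deployment) gives each replica a stable network identity and a persistent volume that survives pod rescheduling, which is the property a stateful storage head node needs.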
The results obtained by measuring the performance of the different combined technologies, comparing for instance the block-device and file-system backend options provided by a Ceph cluster deployed on physical machines, will be shown and discussed.
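The two Ceph backend options compared above can be sketched as two PersistentVolumeClaims bound to different storage classes, one backed by RBD block devices and one by the CephFS shared file system. The storage-class names here are assumptions; in a real cluster they must match classes provisioned by the Ceph CSI drivers.

```yaml
# Sketch of the two Ceph backend options as Kubernetes claims.
# Storage-class names are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-rbd
spec:
  accessModes: ["ReadWriteOnce"]    # an RBD block device has a single writer
  storageClassName: csi-rbd         # assumed RBD-backed class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-cephfs
spec:
  accessModes: ["ReadWriteMany"]    # CephFS allows shared access across pods
  storageClassName: csi-cephfs      # assumed CephFS-backed class
  resources:
    requests:
      storage: 10Gi
```

The access-mode difference is the practical trade-off behind the comparison: block devices typically offer lower-latency I/O for a single consumer, while a file-system backend can be mounted by many pods at once.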