
An I/O Analysis of HPC Workloads on CephFS and Lustre

Bibliographic Details
Main Authors: Chiusole, Alberto, Cozzini, Stefano, van der Ster, Daniel, Lamanna, Massimo, Giuliani, Graziano
Language: English
Published: 2019
Online Access: https://dx.doi.org/10.1007/978-3-030-34356-9_24
http://cds.cern.ch/record/2806103
Description
Summary: In this contribution we compare the Input/Output (I/O) performance of a High-Performance Computing (HPC) application on two different File Systems, CephFS and Lustre; our goal is to assess whether CephFS could be considered a valid choice for I/O-intensive HPC applications. We perform our analysis using RegCM, a real climate-simulation HPC workload, and IOR, a synthetic benchmark, which we use to reproduce several I/O patterns with different parallel I/O libraries (MPI-IO, HDF5, PnetCDF). We compare writing performance for the two I/O approaches that RegCM implements: the so-called spokesperson (serial) approach and a truly parallel one. The small difference registered between the serial and the parallel approach motivates us to explore in detail how the software stack interacts with the underlying File Systems. For this reason, we use IOR and MPI-IO hints related to Collective Buffering and Data Sieving to analyze several I/O patterns on the two File Systems. Finally we investigate Lazy I/O, a unique feature of CephFS, which disables the file coherency locks introduced by the File System; this allows Ceph to buffer writes and to fully exploit its parallel and distributed architecture. Two clusters were set up for these benchmarks, one at CNR-IOM and a second one at Pawsey Supercomputing Centre; we performed similar tests on both installations, and we recorded a fourfold I/O performance improvement with Lazy I/O enabled. Preliminary results collected so far are quite promising; further actions and possible new I/O optimizations are presented and discussed.
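
To make the Collective Buffering and Data Sieving hints mentioned in the abstract concrete, the sketch below shows how such hints can be passed to MPI-IO from C. It is a minimal illustration, not code from the paper: the hint names (romio_cb_write, romio_ds_write) are specific to the ROMIO implementation and may be ignored by other MPI-IO layers, and the file name testfile.dat and the 1 MiB block size are arbitrary placeholders; IOR exposes equivalent knobs through its own hint options.

/* Minimal MPI-IO sketch: set ROMIO hints for collective buffering and
 * data sieving, then perform a collective write, one block per rank.
 * Build: mpicc -o hints_demo hints_demo.c
 * Run:   mpirun -np 4 ./hints_demo
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK (1 << 20)  /* 1 MiB per rank; arbitrary example size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hints of the kind explored with IOR in the paper:
     * enable collective buffering, disable data sieving on writes. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable");
    MPI_Info_set(info, "romio_ds_write", "disable");

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    char *buf = malloc(BLOCK);
    memset(buf, rank, BLOCK);

    /* Each rank writes one contiguous block at its own offset; the
     * collective call lets the MPI-IO layer aggregate the requests. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK;
    MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_BYTE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    free(buf);
    MPI_Finalize();
    return 0;
}

For the Lazy I/O experiments, CephFS exposes the feature on the client side, for example through libcephfs' ceph_lazyio() call or the client_force_lazyio option; this record does not state which mechanism the authors used.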