Performance and profiling data of plane-wave calculations in Quantum ESPRESSO simulation on three supercomputing centres
Main authors: (not listed in this record)
Format: Online article (text)
Language: English
Published: Elsevier, 2023
Subjects: (not listed in this record)
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10562675/
https://www.ncbi.nlm.nih.gov/pubmed/37823065
http://dx.doi.org/10.1016/j.dib.2023.109614
Summary: This dataset reflects the parallel execution profiles of five Quantum ESPRESSO (QE) simulation versions computing the total energy of the cerium oxide lattice with the self-consistent field (SCF) method. The analysis used a strong-scaling setting to identify the optimal parameters and computing resources needed to complete a single SCF loop for one specific material efficiently; it notably contributed to achieving the Best Performance Award at the 5th APAC HPC-AI Competition. The data comprises three sets. The first set features parallel execution traces captured via the Extrae performance profiling tool, offering a broad view of the QE model's execution behaviour and its use of computational resources. The second set records the QE model's run times on a single node at three HPC centres: ThaiSC TARA in Thailand, NSCC ASPIRE-1 in Singapore, and NCI Gadi in Australia; it focuses on the impact of adjusting three parameters for K-point parallelisation. The final set presents benchmarking data generated by scaling the QE model out across 32 nodes (1,536 CPU cores) on the NCI Gadi supercomputer. Despite its focus on a single material, the dataset serves as a roadmap for researchers to estimate the required computational resources and understand scalability bottlenecks, offering general guidelines adaptable across different HPC systems.
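The record itself contains no code, but the two analyses the summary describes are easy to sketch. Below is a minimal, hypothetical illustration, not taken from the dataset, of (a) generating a K-point parallelisation sweep over pw.x's -nk, -nt and -nd flags (real QE options for k-point pools, FFT task groups and the linear-algebra group size, though whether these are the exact three parameters the dataset varies is an assumption) and (b) computing strong-scaling speedup and parallel efficiency from SCF wall-times. The 48 cores per node is consistent with 32 nodes giving 1,536 cores on Gadi as stated above; all timing values are invented placeholders.

```python
"""A minimal sketch of the two analyses described in the summary.
All numeric values are hypothetical, not taken from the dataset."""

from itertools import product

CORES_PER_NODE = 48  # consistent with 32 nodes x 48 = 1,536 cores on Gadi


def pw_commands(nodes, nk_values, nt_values, nd_values):
    """Yield mpirun command lines for a (nk, nt, nd) parameter sweep."""
    ranks = nodes * CORES_PER_NODE
    for nk, nt, nd in product(nk_values, nt_values, nd_values):
        if ranks % nk:  # k-point pools must divide the MPI ranks evenly
            continue
        yield (f"mpirun -np {ranks} pw.x -nk {nk} -nt {nt} -nd {nd} "
               f"-input scf.in > scf_{nodes}n_{nk}_{nt}_{nd}.out")


def strong_scaling(timings):
    """Print speedup and parallel efficiency relative to the smallest run.

    `timings` maps node count -> SCF wall-time in seconds.
    """
    base_nodes = min(timings)
    base_time = timings[base_nodes]
    for nodes in sorted(timings):
        speedup = base_time / timings[nodes]
        efficiency = speedup / (nodes / base_nodes)
        print(f"{nodes:>3} nodes: speedup {speedup:5.2f}, "
              f"efficiency {efficiency:5.1%}")


if __name__ == "__main__":
    # Example sweep on 2 nodes; flag values are illustrative only.
    for cmd in pw_commands(2, nk_values=(1, 2, 4),
                           nt_values=(1, 2), nd_values=(1, 4)):
        print(cmd)
    # Hypothetical wall-times; the real measurements live in the dataset.
    strong_scaling({1: 1800.0, 2: 950.0, 4: 520.0,
                    8: 300.0, 16: 190.0, 32: 140.0})
```

In a strong-scaling table like the one this script prints, efficiency dropping well below 100% as nodes are added is exactly the kind of scalability bottleneck the summary says the benchmarking set is meant to expose.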