Investigating the Benefit of FP16-Enabled Mixed-Precision Solvers for Symmetric Positive Definite Matrices Using GPUs

Bibliographic Details
Main Authors: Abdelfattah, Ahmad; Tomov, Stan; Dongarra, Jack
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7302814/
http://dx.doi.org/10.1007/978-3-030-50417-5_18
author Abdelfattah, Ahmad
Tomov, Stan
Dongarra, Jack
collection PubMed
description Half-precision computation refers to performing floating-point operations in a 16-bit format. While half precision has been driven largely by machine learning applications, recent algorithmic advances in numerical linear algebra have discovered beneficial use cases for half precision in accelerating the solution of linear systems of equations at higher precisions. In this paper, we present a high-performance, mixed-precision linear solver ([Formula: see text]) for symmetric positive definite systems in double precision using graphics processing units (GPUs). The solver is based on a mixed-precision Cholesky factorization that utilizes the high-performance tensor core units in CUDA-enabled GPUs. Since the Cholesky factors are affected by the low precision, an iterative refinement (IR) solver is required to recover the solution to double-precision accuracy. Two different types of IR solvers are evaluated on a wide range of test matrices. A preprocessing step is also developed, which scales and shifts the matrix, if necessary, in order to preserve its positive definiteness in lower precisions. Our experiments on the V100 GPU show speedups of up to 4.7× against a direct double-precision solver. However, matrix properties such as the condition number and the eigenvalue distribution can affect the convergence rate of the refinement, and consequently the overall performance.
format Online
Article
Text
id pubmed-7302814
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-7302814 2020-06-19. Computational Science – ICCS 2020. Published 2020-06-15. Text en. © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
title Investigating the Benefit of FP16-Enabled Mixed-Precision Solvers for Symmetric Positive Definite Matrices Using GPUs
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7302814/
http://dx.doi.org/10.1007/978-3-030-50417-5_18
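
The description field above outlines the core scheme: a Cholesky factorization computed in low precision, followed by iterative refinement in FP64 to recover a double-precision solution. The following is a minimal NumPy sketch of that idea, not the authors' tensor-core MAGMA implementation: NumPy has no half-precision Cholesky, so the FP16 factorization is emulated by rounding the FP64 factor, and the function name mixed_precision_posv, the tolerance, and the iteration cap are illustrative choices. The scale-and-shift preprocessing mentioned in the description is omitted.

import numpy as np

def mixed_precision_posv(A, b, tol=1e-12, max_iter=50):
    """Solve A x = b (A symmetric positive definite) using an emulated
    low-precision Cholesky factorization plus FP64 iterative refinement."""
    A = np.asarray(A, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)

    # "Low-precision" factorization: compute L in FP64, then round it to
    # FP16 and back, emulating the accuracy loss of a half-precision
    # Cholesky (A is approximately L @ L.T, with O(2^-11) relative error).
    L = np.linalg.cholesky(A).astype(np.float16).astype(np.float64)

    def lowprec_solve(r):
        # Forward/backward substitution with the rounded factor. A general
        # solve is used for brevity; a triangular solve would be cheaper.
        y = np.linalg.solve(L, r)
        return np.linalg.solve(L.T, y)

    x = lowprec_solve(b)                       # initial low-accuracy solution
    for _ in range(max_iter):
        r = b - A @ x                          # residual computed in FP64
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + lowprec_solve(r)               # classical refinement step
    return x

# Example usage on a well-conditioned SPD matrix built for the test.
rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # SPD by construction
b = rng.standard_normal(n)
x = mixed_precision_posv(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))

On a well-conditioned matrix like the one constructed above, the refinement loop typically reaches double-precision accuracy within a handful of iterations; as the description notes, a large condition number or an unfavorable eigenvalue distribution slows the convergence of the refinement and erodes the overall speedup.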