Investigating the Benefit of FP16-Enabled Mixed-Precision Solvers for Symmetric Positive Definite Matrices Using GPUs
Half-precision computation refers to performing floating-point operations in a 16-bit format. While half-precision has been driven largely by machine learning applications, recent algorithmic advances in numerical linear algebra have discovered beneficial use cases for half precision in accelerating...
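The approach the abstract alludes to is commonly realized as mixed-precision iterative refinement: factor the SPD matrix once in low precision, then recover full accuracy with cheap residual corrections in double precision. The sketch below is an illustration of that general idea, not the paper's implementation; NumPy's `linalg` routines do not run in `float16`, so `float32` stands in for the low-precision factorization.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10):
    """Solve the SPD system Ax = b by Cholesky factorization in low
    precision plus iterative refinement in double precision.

    Illustrative sketch only: float32 stands in for FP16, since
    numpy.linalg does not support float16 arrays.
    """
    # One low-precision Cholesky factorization (the expensive O(n^3) step).
    L = np.linalg.cholesky(A.astype(np.float32))

    def lp_solve(r):
        # Cheap triangular solves with the low-precision factor.
        y = np.linalg.solve(L, r.astype(np.float32))
        return np.linalg.solve(L.T, y).astype(np.float64)

    x = lp_solve(b)                # initial low-precision solution
    for _ in range(iters):
        r = b - A @ x              # residual computed in double precision
        x = x + lp_solve(r)        # correct using the existing factor
    return x
```

Each refinement step reuses the low-precision factor, so the extra cost per iteration is only O(n^2); for reasonably conditioned matrices the iterate converges to double-precision accuracy.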
Main Authors: Abdelfattah, Ahmad; Tomov, Stan; Dongarra, Jack
Format: Online Article Text
Language: English
Published: 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7302814/
http://dx.doi.org/10.1007/978-3-030-50417-5_18
Similar Items
- Mixed-precision iterative refinement using tensor cores on GPUs to accelerate solution of linear systems
  by: Haidar, Azzam, et al.
  Published: (2020)
- Multifrontal parallel distributed symmetric and unsymmetric solvers
  by: Amestoy, P R, et al.
  Published: (1998)
- mbend: an R package for bending non-positive-definite symmetric matrices to positive-definite
  by: Nilforooshan, Mohammad Ali
  Published: (2020)
- Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices
  by: Congedo, Marco, et al.
  Published: (2015)
- Controlled precision QUBO-based algorithm to compute eigenvectors of symmetric matrices
  by: Krakoff, Benjamin, et al.
  Published: (2022)