Mixed-precision iterative refinement using tensor cores on GPUs to accelerate solution of linear systems
Double-precision floating-point arithmetic (FP64) has been the de facto standard for engineering and scientific simulations for several decades. Problem complexity and the sheer volume of data coming from various instruments and sensors motivate researchers to mix and match various approaches to opt...
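The approach the abstract alludes to, iterative refinement with a low-precision inner solver and high-precision residual correction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses FP32 as the stand-in low precision (NumPy has no FP16 tensor-core solve path) and a well-conditioned synthetic matrix.

```python
import numpy as np

def mixed_precision_refine(A, b, iters=5):
    """Iterative refinement: cheap low-precision solves, FP64 residuals.

    The inner solves run in float32 (a stand-in for the FP16/tensor-core
    factorization discussed in the paper); the residual r = b - A@x is
    computed in float64, which drives the error down toward FP64 accuracy.
    """
    A32 = A.astype(np.float32)  # low-precision copy, "factorized" once
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                       # FP64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))      # FP32 correction
        x += d.astype(np.float64)
    return x

# Synthetic well-conditioned test problem (assumption for the demo).
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant
x_true = rng.standard_normal(n)
b = A @ x_true

x = mixed_precision_refine(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

A plain FP32 solve of this system leaves a relative error near single-precision roundoff (~1e-6); a few refinement sweeps recover close to double-precision accuracy, which is the speed/accuracy trade the paper exploits on tensor cores.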
Main authors: | Haidar, Azzam; Bayraktar, Harun; Tomov, Stanimire; Dongarra, Jack; Higham, Nicholas J. |
Format: | Online article, text |
Language: | English |
Published: | The Royal Society Publishing, 2020 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7735315/ https://www.ncbi.nlm.nih.gov/pubmed/33363437 http://dx.doi.org/10.1098/rspa.2020.0110 |
Similar items
- heFFTe: Highly Efficient FFT for Exascale
  by: Ayala, Alan, et al.
  Published: (2020)
- Investigating the Benefit of FP16-Enabled Mixed-Precision Solvers for Symmetric Positive Definite Matrices Using GPUs
  by: Abdelfattah, Ahmad, et al.
  Published: (2020)
- Accelerating Madgraph with CPU vectorization and GPUs
  by: Valassi, Andrea
  Published: (2023)
- Accelerating AutoDock Vina with GPUs
  by: Tang, Shidi, et al.
  Published: (2022)
- Acceleration of Approximate Matrix Multiplications on GPUs
  by: Okuyama, Takuya, et al.
  Published: (2023)