Straggler-Aware Distributed Learning: Communication–Computation Latency Trade-Off
When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in...
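To illustrate the idea described in the abstract, the sketch below shows one simple form of straggler tolerance: replicating data partitions across workers so the full gradient can still be assembled while ignoring the slowest workers. This is a minimal illustrative example, not the scheme proposed in the article; all names, parameters, and the cyclic assignment are assumptions made for the demo.

```python
# Illustrative sketch (not the paper's exact scheme): replication-based
# straggler tolerance for distributed gradient descent on a linear
# least-squares problem. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 8          # number of workers
r = 3          # replication factor: each partition lives on r workers,
               # so up to r - 1 stragglers can be ignored per iteration
n, d = 400, 10 # dataset size and model dimension

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Split the data into K partitions; worker i is assigned partitions
# {i, i+1, ..., i+r-1} (mod K), a cyclic repetition assignment.
partitions = np.array_split(np.arange(n), K)
assignment = [[(i + j) % K for j in range(r)] for i in range(K)]

def partial_gradient(w, part):
    """Least-squares gradient restricted to one data partition."""
    Xi, yi = X[partitions[part]], y[partitions[part]]
    return Xi.T @ (Xi @ w - yi)

w = np.zeros(d)
lr = 1e-3
for it in range(100):
    # Simulate random computation times and drop the r - 1 slowest workers.
    finish_times = rng.exponential(size=K)
    fast_workers = np.argsort(finish_times)[: K - (r - 1)]

    # Collect each partition's gradient once from whichever fast worker has it.
    collected = {}
    for i in fast_workers:
        for part in assignment[i]:
            collected.setdefault(part, partial_gradient(w, part))

    # Under cyclic replication, any K - r + 1 workers jointly cover all
    # partitions, so the sum equals the full-batch gradient despite stragglers.
    assert len(collected) == K
    grad = sum(collected.values())
    w -= lr * grad

print("recovery error:", np.linalg.norm(w - w_true))
```

The trade-off the title refers to shows up directly in this toy setup: a larger replication factor r tolerates more stragglers (less waiting) but multiplies each worker's computation and the data it must hold.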
Main Authors: Ozfatura, Emre; Ulukus, Sennur; Gündüz, Deniz
Format: Online Article Text
Language: English
Published: MDPI, 2020
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517046/
https://www.ncbi.nlm.nih.gov/pubmed/33286316
http://dx.doi.org/10.3390/e22050544
Similar Items

- Stragglers on the March
  Published: (1886)
- Ecology of blue straggler stars
  by: Boffin, Henri, et al.
  Published: (2015)
- Adversarial Machine Learning for NextG Covert Communications Using Multiple Antennas
  by: Kim, Brian, et al.
  Published: (2022)
- Straggler- and Adversary-Tolerant Secure Distributed Matrix Multiplication Using Polynomial Codes
  by: Byrne, Eimear, et al.
  Published: (2023)
- Distributed Hypothesis Testing over a Noisy Channel: Error-Exponents Trade-Off
  by: Sreekumar, Sreejith, et al.
  Published: (2023)