
Straggler-Aware Distributed Learning: Communication–Computation Latency Trade-Off

When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in...

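As a rough illustration of the idea summarized in the abstract (tolerating stragglers by assigning redundant computations across workers), the Python sketch below simulates synchronous distributed gradient descent in which each data partition is replicated on r workers and the parameter server stops waiting as soon as the finished workers jointly cover all partitions. The problem setup, cyclic replication pattern, learning rate, and exponential finish-time model are all illustrative assumptions, not the specific scheme proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize 0.5 * ||Xw - y||^2, split into p data partitions.
n_samples, n_features, p = 120, 5, 6
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_samples)
partitions = np.array_split(np.arange(n_samples), p)

# Redundant assignment (assumed): each of the n workers holds r consecutive
# partitions (cyclic replication), so every partition is stored on r workers.
n_workers, r = 6, 2
assignment = [[(w + j) % p for j in range(r)] for w in range(n_workers)]

def partial_gradient(w_vec, part_ids):
    # Gradient contribution of the listed partitions: X_i^T (X_i w - y_i).
    idx = np.concatenate([partitions[i] for i in part_ids])
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w_vec - yi)

w = np.zeros(n_features)
lr = 1e-3
for it in range(50):
    # Simulated straggling: each worker finishes at a random time.
    finish_times = rng.exponential(1.0, size=n_workers)
    order = np.argsort(finish_times)

    # The parameter server collects results in order of completion and stops
    # once every partition is covered by at least one finished worker.
    covered, grad = set(), np.zeros(n_features)
    for worker in order:
        new_parts = [i for i in assignment[worker] if i not in covered]
        if new_parts:
            grad += partial_gradient(w, new_parts)
            covered.update(new_parts)
        if len(covered) == p:
            break  # remaining (straggling) workers are ignored this iteration

    w -= lr * grad

print("final loss:", 0.5 * np.linalg.norm(X @ w - y) ** 2)

With replication factor r, up to r - 1 stragglers holding the same partition can be ignored in a given iteration without changing the computed gradient, which is the redundancy/latency trade-off the title refers to.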

Bibliographic Details
Main Authors: Ozfatura, Emre, Ulukus, Sennur, Gündüz, Deniz
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517046/
https://www.ncbi.nlm.nih.gov/pubmed/33286316
http://dx.doi.org/10.3390/e22050544