Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions


Bibliographic Details
Main Authors: Spiridonoff, Artin; Olshevsky, Alex; Paschalidis, Ioannis Ch.
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7520166/
https://www.ncbi.nlm.nih.gov/pubmed/32989377
Description
Summary: We consider the standard model of distributed optimization of a sum of functions $F(z) = \sum_{i=1}^{n} f_i(z)$, where node $i$ in a network holds the function $f_i(z)$. We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node $i$ is capable of generating gradients of its function $f_i(z)$ corrupted by zero-mean bounded-support additive noise at each step, (ii) $F(z)$ is strongly convex, and (iii) each $f_i(z)$ has Lipschitz gradients. We show that our proposed method asymptotically performs as well as the best bounds on centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions $f_1(z), \ldots, f_n(z)$ at each step.
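
To make the setting concrete, below is a minimal, synchronous sketch of stochastic gradient-push (push-sum averaging combined with noisy gradient steps) on a fixed directed graph. It is an illustration of the classical Gradient-Push template only: the paper's actual method additionally handles asynchrony, message delays, and message losses, which this toy omits. The directed ring, the $O(1/t)$ step sizes, and the quadratic local functions are assumptions chosen for the example, not the paper's experimental setup.

```python
# Illustrative sketch of (synchronous) stochastic gradient-push.
# Graph, step sizes, and local objectives are assumptions for this demo.
import numpy as np

rng = np.random.default_rng(0)

n = 5
# Directed ring with self-loops: node i pushes to itself and (i+1) mod n.
out_neighbors = {i: [i, (i + 1) % n] for i in range(n)}

# Local quadratics f_i(z) = 0.5 * (z - a_i)^2, so F(z) = sum_i f_i(z)
# is strongly convex with minimizer mean(a).
a = rng.normal(size=n)

def noisy_grad(i, z, sigma=0.1):
    # Gradient of f_i corrupted by zero-mean, bounded-support noise
    # (uniform on [-sigma, sigma]), matching assumption (i) above.
    return (z - a[i]) + rng.uniform(-sigma, sigma)

x = np.zeros(n)   # push-sum numerators
y = np.ones(n)    # push-sum weights
z = x / y         # each node's estimate of the minimizer

for t in range(1, 20001):
    x_new = np.zeros(n)
    y_new = np.zeros(n)
    for i in range(n):
        d = len(out_neighbors[i])      # out-degree, incl. self-loop
        for j in out_neighbors[i]:     # "push" equal shares to out-neighbors
            x_new[j] += x[i] / d
            y_new[j] += y[i] / d
    z = x_new / y_new                  # de-biased estimates after mixing
    alpha = 1.0 / t                    # O(1/t) steps, as used under strong convexity
    for i in range(n):
        x_new[i] -= alpha * noisy_grad(i, z[i])
    x, y = x_new, y_new

print("node estimates:", z)
print("true minimizer:", a.mean())
```

Running this, all node estimates approach mean(a), the minimizer of $F$. The division by the push-sum weights $y_i$ is what corrects for the directed (column-stochastic, not doubly stochastic) communication pattern; the paper's contribution is showing that a robustified variant of this scheme matches the best centralized noisy-gradient-descent bounds despite asynchrony, delays, and losses.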