
On Consensus-Optimality Trade-offs in Collaborative Deep Learning

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm that enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to individual model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms via numerical experiments, and demonstrate significant improvements over existing methods for collaborative deep learning.
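
The two algorithms named in the abstract can be pictured as a neighbor-averaging (consensus) step combined with a local stochastic gradient step. The sketch below is a minimal NumPy illustration of that idea only; it is not the authors' published update rules, and the function names, the mixing matrix W, the consensus_rounds count, and the interpolation weight omega are assumptions made purely for illustration.

    import numpy as np

    def i_cdsgd_step(params, grads, W, step_size, consensus_rounds=3):
        # Illustrative i-CDSGD-style update (assumed form, not the paper's exact rule).
        # params : (n_agents, dim) per-agent model parameters
        # grads  : (n_agents, dim) per-agent stochastic gradients
        # W      : (n_agents, n_agents) doubly stochastic mixing matrix encoding
        #          the fixed communication topology
        mixed = params.copy()
        for _ in range(consensus_rounds):   # multiple consensus steps per SGD iteration
            mixed = W @ mixed               # each agent averages with its neighbors
        return mixed - step_size * grads    # local stochastic gradient correction

    def g_cdsgd_step(params, grads, W, step_size, omega=0.5):
        # Illustrative g-CDSGD-style update (assumed form): omega in [0, 1]
        # interpolates between pure local SGD (omega = 0, complete disagreement)
        # and a fully consensus-driven update (omega = 1).
        consensus_pull = W @ params
        blended = (1.0 - omega) * params + omega * consensus_pull
        return blended - step_size * grads

In this reading, increasing consensus_rounds pushes i-CDSGD toward agreement across agents at extra communication cost, while omega in g-CDSGD explicitly trades consensus against each agent's individual optimum; both knobs correspond to the trade-off the abstract describes, under the assumptions stated above.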

Bibliographic Details
Main Authors: Jiang, Zhanhong; Balu, Aditya; Hegde, Chinmay; Sarkar, Soumik
Format: Online Article (Text)
Language: English
Journal: Frontiers in Artificial Intelligence (Front Artif Intell)
Published: Frontiers Media S.A., 2021-09-14
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8478077/
https://www.ncbi.nlm.nih.gov/pubmed/34595470
http://dx.doi.org/10.3389/frai.2021.573731

Copyright © 2021 Jiang, Balu, Hegde and Sarkar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.