
Hybrid-DCA: A double asynchronous approach for stochastic dual coordinate ascent

In prior works, stochastic dual coordinate ascent (SDCA) has been parallelized either in a multi-core environment where the cores communicate through shared memory, or in a multi-processor distributed-memory environment where the processors communicate through message passing. In this paper, we propose a hybrid SDCA framework for multi-core clusters, the most common high-performance computing environment, which consists of multiple nodes, each with multiple cores and its own shared memory. We distribute the data across nodes; each node solves a local problem in an asynchronous parallel fashion on its cores, and the local updates are then aggregated via an asynchronous across-node update scheme. The proposed double asynchronous method converges to a global solution for L-Lipschitz continuous loss functions, and at a linear convergence rate if a smooth convex loss function is used. Extensive empirical comparisons show that our algorithm scales better than the best known shared-memory methods and runs faster than previous distributed-memory methods. Big datasets, such as one of 280 GB from the LIBSVM repository, cannot be accommodated on a single node and hence cannot be solved by a single-node parallel algorithm. For such a dataset, our hybrid algorithm takes less than 30 seconds to achieve a duality gap of 10^−5 on 16 nodes, each using 12 cores, which is significantly faster than the best known distributed algorithms, such as CoCoA+, which take more than 160 seconds on 16 nodes.


Bibliographic Details
Main Authors: Pal, Soumitra, Xu, Tingyang, Yang, Tianbao, Rajasekaran, Sanguthevar, Bi, Jinbo
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7375401/
https://www.ncbi.nlm.nih.gov/pubmed/32699464
http://dx.doi.org/10.1016/j.jpdc.2020.04.002
collection PubMed
description In prior works, stochastic dual coordinate ascent (SDCA) has been parallelized either in a multi-core environment where the cores communicate through shared memory, or in a multi-processor distributed-memory environment where the processors communicate through message passing. In this paper, we propose a hybrid SDCA framework for multi-core clusters, the most common high-performance computing environment, which consists of multiple nodes, each with multiple cores and its own shared memory. We distribute the data across nodes; each node solves a local problem in an asynchronous parallel fashion on its cores, and the local updates are then aggregated via an asynchronous across-node update scheme. The proposed double asynchronous method converges to a global solution for L-Lipschitz continuous loss functions, and at a linear convergence rate if a smooth convex loss function is used. Extensive empirical comparisons show that our algorithm scales better than the best known shared-memory methods and runs faster than previous distributed-memory methods. Big datasets, such as one of 280 GB from the LIBSVM repository, cannot be accommodated on a single node and hence cannot be solved by a single-node parallel algorithm. For such a dataset, our hybrid algorithm takes less than 30 seconds to achieve a duality gap of 10^−5 on 16 nodes, each using 12 cores, which is significantly faster than the best known distributed algorithms, such as CoCoA+, which take more than 160 seconds on 16 nodes.
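To make the abstract's terms concrete, the following is a minimal serial sketch of the base SDCA coordinate update and the duality gap it monitors, for an L2-regularized hinge-loss SVM (the standard SDCA setting). This is an illustration only, not the paper's Hybrid-DCA implementation: the dataset, regularization parameter `lam`, and epoch count are arbitrary choices, and the asynchronous multi-core/multi-node aggregation that is the paper's contribution is not reproduced here.

```python
import numpy as np

def sdca_svm(X, y, lam=0.1, epochs=500, seed=0):
    """Serial SDCA for min_w (1/n) sum_i max(0, 1 - y_i w.x_i) + (lam/2)||w||^2.

    Maintains dual variables alpha_i in [0, 1] and the primal vector
    w(alpha) = (1/(lam*n)) sum_i alpha_i y_i x_i. Hybrid-DCA runs updates
    of this kind asynchronously on cores within a node and aggregates
    local updates asynchronously across nodes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)            # dual variables, one per example
    w = np.zeros(d)                # primal vector kept in sync with alpha
    sq = (X ** 2).sum(axis=1)      # per-example squared norms ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):   # visit dual coordinates in random order
            margin = y[i] * (X[i] @ w)
            # Closed-form maximizer of the dual along coordinate i,
            # clipped so that alpha_i stays in [0, 1].
            delta = lam * n * (1.0 - margin) / sq[i]
            delta = np.clip(alpha[i] + delta, 0.0, 1.0) - alpha[i]
            alpha[i] += delta
            w += delta * y[i] * X[i] / (lam * n)
    return w, alpha

def duality_gap(X, y, w, alpha, lam):
    """P(w) - D(alpha); nonnegative, and zero at the optimum."""
    primal = np.maximum(0.0, 1.0 - y * (X @ w)).mean() + 0.5 * lam * (w @ w)
    dual = alpha.mean() - 0.5 * lam * (w @ w)
    return primal - dual
```

On a tiny separable toy problem, the gap drops quickly; the paper's stopping criterion (a duality gap of 10^−5) is checked exactly this way, just on far larger data and with updates applied asynchronously.

```python
X = np.array([[2., 2.], [3., 1.], [-2., -2.], [-1., -3.]])
y = np.array([1., 1., -1., -1.])
w, alpha = sdca_svm(X, y, lam=0.1, epochs=500)
print(duality_gap(X, y, w, alpha, 0.1))  # small nonnegative gap
```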
format Online
Article
Text
id pubmed-7375401
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-7375401 J Parallel Distrib Comput 2020-04-13 2020-09 /pmc/articles/PMC7375401/ /pubmed/32699464 http://dx.doi.org/10.1016/j.jpdc.2020.04.002 Text en This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title Hybrid-DCA: A double asynchronous approach for stochastic dual coordinate ascent
topic Article