FedDNA: Federated learning using dynamic node alignment
Federated Learning (FL), as a new computing framework, has recently received significant attention due to its advantages in preserving data privacy while training high-performing models. During FL training, distributed sites first learn their respective parameters. A central site then consolidates...
Main Authors: | Wang, Shuwen; Zhu, Xingquan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10317242/ https://www.ncbi.nlm.nih.gov/pubmed/37399217 http://dx.doi.org/10.1371/journal.pone.0288157 |
_version_ | 1785067863194730496 |
---|---|
author | Wang, Shuwen; Zhu, Xingquan |
author_facet | Wang, Shuwen; Zhu, Xingquan |
author_sort | Wang, Shuwen |
collection | PubMed |
description | Federated Learning (FL), as a new computing framework, has recently received significant attention due to its advantages in preserving data privacy while training high-performing models. During FL training, distributed sites first learn their respective parameters. A central site then consolidates the learned parameters, using averaging or other approaches, and disseminates new weights across all sites to carry out the next round of learning. The distributed parameter learning and consolidation repeat iteratively until the algorithm converges or terminates. Many FL methods exist to aggregate weights from distributed sites, but most use a static node alignment approach, where nodes of the distributed networks are assigned in advance to match nodes and aggregate their weights. In reality, neural networks, especially dense networks, have nontransparent roles with respect to individual nodes. Combined with the random nature of the networks, static node matching often does not yield the best matching between nodes across sites. In this paper, we propose FedDNA, a dynamic node alignment federated learning algorithm. Our theme is to find the best matching nodes between different sites, and then aggregate the weights of the matching nodes for federated learning. For each node in a neural network, we represent its weight values as a vector, and use a distance function to find the most similar nodes, i.e., nodes with the smallest distance from other sites. Because finding the best matching across all sites is computationally expensive, we further design a minimum spanning tree based approach to ensure that a node from each site has matched peers from other sites, such that the total pairwise distance across all sites is minimized. Experiments and comparisons demonstrate that FedDNA outperforms commonly used baselines, such as FedAvg, for federated learning. |
format | Online Article Text |
id | pubmed-10317242 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-10317242 2023-07-04 FedDNA: Federated learning using dynamic node alignment Wang, Shuwen Zhu, Xingquan PLoS One Research Article Federated Learning (FL), as a new computing framework, has recently received significant attention due to its advantages in preserving data privacy while training high-performing models. During FL training, distributed sites first learn their respective parameters. A central site then consolidates the learned parameters, using averaging or other approaches, and disseminates new weights across all sites to carry out the next round of learning. The distributed parameter learning and consolidation repeat iteratively until the algorithm converges or terminates. Many FL methods exist to aggregate weights from distributed sites, but most use a static node alignment approach, where nodes of the distributed networks are assigned in advance to match nodes and aggregate their weights. In reality, neural networks, especially dense networks, have nontransparent roles with respect to individual nodes. Combined with the random nature of the networks, static node matching often does not yield the best matching between nodes across sites. In this paper, we propose FedDNA, a dynamic node alignment federated learning algorithm. Our theme is to find the best matching nodes between different sites, and then aggregate the weights of the matching nodes for federated learning. For each node in a neural network, we represent its weight values as a vector, and use a distance function to find the most similar nodes, i.e., nodes with the smallest distance from other sites. Because finding the best matching across all sites is computationally expensive, we further design a minimum spanning tree based approach to ensure that a node from each site has matched peers from other sites, such that the total pairwise distance across all sites is minimized. Experiments and comparisons demonstrate that FedDNA outperforms commonly used baselines, such as FedAvg, for federated learning. Public Library of Science 2023-07-03 /pmc/articles/PMC10317242/ /pubmed/37399217 http://dx.doi.org/10.1371/journal.pone.0288157 Text en © 2023 Wang, Zhu https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Wang, Shuwen Zhu, Xingquan FedDNA: Federated learning using dynamic node alignment |
title | FedDNA: Federated learning using dynamic node alignment |
title_full | FedDNA: Federated learning using dynamic node alignment |
title_fullStr | FedDNA: Federated learning using dynamic node alignment |
title_full_unstemmed | FedDNA: Federated learning using dynamic node alignment |
title_short | FedDNA: Federated learning using dynamic node alignment |
title_sort | feddna: federated learning using dynamic node alignment |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10317242/ https://www.ncbi.nlm.nih.gov/pubmed/37399217 http://dx.doi.org/10.1371/journal.pone.0288157 |
work_keys_str_mv | AT wangshuwen feddnafederatedlearningusingdynamicnodealignment AT zhuxingquan feddnafederatedlearningusingdynamicnodealignment |
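
The description field above outlines the core idea of dynamic node alignment: represent each node by its weight vector, match nodes across sites with a distance function, and only then aggregate. The sketch below illustrates that idea for two sites under stated assumptions; it is not the authors' code, and the function name, the choice of Euclidean distance, the greedy matching order, and the plain averaging step are all illustrative.

```python
# Illustrative sketch only, not the authors' implementation. It assumes each
# hidden node is represented by its incoming weight vector (one row of a
# layer's weight matrix), as described in the abstract.
import numpy as np

def align_and_average(W_a: np.ndarray, W_b: np.ndarray) -> np.ndarray:
    """Greedily match each node of site A to its closest unmatched node of
    site B by Euclidean distance between weight vectors, then average the
    matched pairs (FedAvg-style aggregation applied to aligned nodes)."""
    n = W_a.shape[0]
    # dist[i, j] = distance between node i of site A and node j of site B.
    dist = np.linalg.norm(W_a[:, None, :] - W_b[None, :, :], axis=2)

    matched_b = np.empty_like(W_a)
    free_a, free_b = set(range(n)), set(range(n))
    # Visit candidate pairs from smallest to largest distance, keeping a pair
    # only if both nodes are still unmatched (greedy one-to-one matching).
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    for i, j in order:
        if int(i) in free_a and int(j) in free_b:
            matched_b[i] = W_b[j]
            free_a.remove(int(i))
            free_b.remove(int(j))

    return (W_a + matched_b) / 2.0

# Toy usage: two sites, 4 hidden nodes with 8 incoming weights each.
rng = np.random.default_rng(0)
W_global = align_and_average(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

An optimal assignment (e.g., scipy.optimize.linear_sum_assignment) could replace the greedy pass; the sketch is only meant to show that node matching happens before weights are averaged.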
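The abstract also mentions a minimum spanning tree based step for the multi-site case, where one node per site is grouped so that the total pairwise distance is small. The snippet below, a sketch assuming SciPy is available, only shows how one candidate group could be scored by the total edge weight of an MST over its pairwise distances; the search procedure FedDNA uses to choose groups is not reproduced here, and all names are illustrative.

```python
# Illustrative sketch only: scores one candidate group (one node per site)
# by the minimum-spanning-tree weight of its pairwise-distance graph.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def group_alignment_cost(group_vectors: np.ndarray) -> float:
    """group_vectors has shape (num_sites, dim): one node weight vector per
    site. A lower cost means the nodes are mutually closer, i.e., a better
    candidate group to align and aggregate."""
    vecs = np.asarray(group_vectors, dtype=float)
    # Dense matrix of pairwise Euclidean distances (complete graph on the nodes).
    dist = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)
    # The MST connects all nodes with minimum total distance; its edge-weight
    # sum serves as the group's alignment cost.
    return float(minimum_spanning_tree(dist).sum())

# Toy usage: compare two candidate groups across 3 sites (8-dim weight vectors).
rng = np.random.default_rng(1)
cost_a = group_alignment_cost(rng.normal(size=(3, 8)))
cost_b = group_alignment_cost(rng.normal(size=(3, 8)))
```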