Federated Learning Optimization Algorithm for Automatic Weight Optimal
Federated learning (FL), a distributed machine-learning framework, is poised to effectively protect data privacy and security, and has been widely applied in a variety of fields in recent years. However, the system heterogeneity and statistical heterogeneity of FL pose serious obstacles to the...
Main Authors: | Yu, Xi; Li, Li; He, Xin; Chen, Shengbo; Jiang, Lei |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9668465/ https://www.ncbi.nlm.nih.gov/pubmed/36407688 http://dx.doi.org/10.1155/2022/8342638 |
_version_ | 1784831919405400064 |
---|---|
author | Yu, Xi Li, Li He, Xin Chen, Shengbo Jiang, Lei |
author_facet | Yu, Xi Li, Li He, Xin Chen, Shengbo Jiang, Lei |
author_sort | Yu, Xi |
collection | PubMed |
description | Federated learning (FL), a distributed machine-learning framework, can effectively protect data privacy and security and has been widely applied in a variety of fields in recent years. However, the system heterogeneity and statistical heterogeneity of FL pose serious obstacles to the quality of the global model. This study investigates server and client resource allocation in the context of FL system resource efficiency and proposes the FedAwo optimization algorithm. The approach combines adaptive learning with federated learning and makes full use of the server's computing resources to calculate the optimal weight value for each client; the global model is then aggregated according to these optimal weights, which significantly reduces the detrimental effects of statistical and system heterogeneity. In traditional FL, we found that many clients' local training converges earlier than the specified number of epochs; nevertheless, traditional FL requires each client to train for the full specified epochs, rendering a large number of client-side computations meaningless. To further lower the training cost, the augmented FedAwo* algorithm is proposed. FedAwo* takes the heterogeneity of clients into account and sets a criterion for local convergence: once a client's local model meets the criterion, it is immediately returned to the server, so the number of local epochs is adapted dynamically. Extensive experiments on the MNIST and Fashion-MNIST public datasets show that the global model converges faster and achieves higher accuracy with FedAwo and FedAwo* than with the FedAvg, FedProx, and FedAdp baseline algorithms. |
format | Online Article Text |
id | pubmed-9668465 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-96684652022-11-17 Federated Learning Optimization Algorithm for Automatic Weight Optimal Yu, Xi Li, Li He, Xin Chen, Shengbo Jiang, Lei Comput Intell Neurosci Research Article [abstract repeated verbatim; see the description field above] 
Hindawi 2022-11-09 /pmc/articles/PMC9668465/ /pubmed/36407688 http://dx.doi.org/10.1155/2022/8342638 Text en Copyright © 2022 Xi Yu et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Yu, Xi Li, Li He, Xin Chen, Shengbo Jiang, Lei Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title | Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title_full | Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title_fullStr | Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title_full_unstemmed | Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title_short | Federated Learning Optimization Algorithm for Automatic Weight Optimal |
title_sort | federated learning optimization algorithm for automatic weight optimal |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9668465/ https://www.ncbi.nlm.nih.gov/pubmed/36407688 http://dx.doi.org/10.1155/2022/8342638 |
work_keys_str_mv | AT yuxi federatedlearningoptimizationalgorithmforautomaticweightoptimal AT lili federatedlearningoptimizationalgorithmforautomaticweightoptimal AT hexin federatedlearningoptimizationalgorithmforautomaticweightoptimal AT chenshengbo federatedlearningoptimizationalgorithmforautomaticweightoptimal AT jianglei federatedlearningoptimizationalgorithmforautomaticweightoptimal |
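The abstract describes two mechanisms: server-side aggregation of client models under per-client weights, and early termination of local training once a convergence criterion is met (FedAwo*'s adaptive epochs). The sketch below illustrates those two ideas only; it is not the paper's method. The record does not specify how FedAwo computes its optimal weights, so `aggregate` simply takes weights from the caller (data-size weights in the driver loop, as in FedAvg), and the local criterion is a plain loss-improvement threshold. All names (`local_update`, `aggregate`, `tol`) are illustrative assumptions.

```python
import numpy as np

def local_update(w_global, data, lr=0.1, max_epochs=50, tol=1e-6):
    """Gradient descent on a least-squares loss, stopping early once the
    per-epoch loss improvement falls below `tol` -- a stand-in for the
    local convergence criterion that lets FedAwo* cut client epochs short."""
    X, y = data
    w = w_global.copy()
    prev_loss = np.inf
    epochs_run = 0
    for _ in range(max_epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        epochs_run += 1
        loss = 0.5 * np.mean((X @ w - y) ** 2)
        if prev_loss - loss < tol:      # converged before max_epochs
            break
        prev_loss = loss
    return w, epochs_run

def aggregate(models, weights):
    """Weighted aggregation of client models. `weights` stands in for the
    server-computed optimal weights; here the caller supplies them."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize to a convex combination
    return sum(wt * m for wt, m in zip(weights, models))

# Toy federation: two clients with different amounts of data drawn from
# the same linear model (heterogeneous system capacity in miniature).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 60):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(5):                      # communication rounds
    results = [local_update(w_global, d) for d in clients]
    w_global = aggregate([m for m, _ in results],
                         weights=[len(d[1]) for d in clients])
```

Because of the early-stop check, each client reports how many epochs it actually ran rather than always training for `max_epochs`, which is the cost saving the abstract attributes to FedAwo*.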