Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
As a promising distributed learning paradigm, federated learning (FL) faces the challenge of communication–computation bottlenecks in practical deployments. In this work, we mainly focus on the pruning, quantization, and coding of FL. By adopting a layer-wise operation, we propose an explicit and universal scheme: FedLP-Q (federated learning with layer-wise pruning-quantization). Pruning strategies for homogeneity/heterogeneity scenarios, the stochastic quantization rule, and the corresponding coding scheme were developed. Both theoretical and experimental evaluations suggest that FedLP-Q improves the system efficiency of communication and computation with controllable performance degradation. The key novelty of FedLP-Q is that it serves as a joint pruning-quantization FL framework with layer-wise processing and can easily be applied in practical FL systems.
Main Authors: | Zhu, Zheqi; Shi, Yuchen; Xin, Gangtao; Peng, Chenghui; Fan, Pingyi; Letaief, Khaled B. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453433/ https://www.ncbi.nlm.nih.gov/pubmed/37628235 http://dx.doi.org/10.3390/e25081205 |
_version_ | 1785095934975148032 |
---|---|
author | Zhu, Zheqi Shi, Yuchen Xin, Gangtao Peng, Chenghui Fan, Pingyi Letaief, Khaled B. |
author_facet | Zhu, Zheqi Shi, Yuchen Xin, Gangtao Peng, Chenghui Fan, Pingyi Letaief, Khaled B. |
author_sort | Zhu, Zheqi |
collection | PubMed |
description | As a promising distributed learning paradigm, federated learning (FL) faces the challenge of communication–computation bottlenecks in practical deployments. In this work, we mainly focus on the pruning, quantization, and coding of FL. By adopting a layer-wise operation, we propose an explicit and universal scheme: FedLP-Q (federated learning with layer-wise pruning-quantization). Pruning strategies for homogeneity/heterogeneity scenarios, the stochastic quantization rule, and the corresponding coding scheme were developed. Both theoretical and experimental evaluations suggest that FedLP-Q improves the system efficiency of communication and computation with controllable performance degradation. The key novelty of FedLP-Q is that it serves as a joint pruning-quantization FL framework with layer-wise processing and can easily be applied in practical FL systems. |
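The description above names two of the scheme's ingredients: a stochastic quantization rule and layer-wise pruning of model updates. As an illustration only — this is a minimal NumPy sketch under assumed conventions (a uniform b-bit grid and an independent per-layer keep probability), not the authors' FedLP-Q implementation — the two operations might look like:

```python
import numpy as np

def stochastic_quantize(w, bits=4):
    """Stochastically round a weight tensor onto a uniform b-bit grid.

    Each value is mapped near one of 2**bits levels spanning [w.min(), w.max()];
    rounding up happens with probability equal to the fractional distance to the
    next level, which makes the quantizer unbiased: E[q] == w.
    """
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (w - lo) / scale            # position on the integer grid
    floor = np.floor(normalized)
    prob_up = normalized - floor             # fractional part in [0, 1)
    q = floor + (np.random.rand(*w.shape) < prob_up)
    return lo + q * scale                    # dequantized values

def layer_wise_prune(layer_params, keep_prob=0.7, rng=None):
    """Keep each layer's update independently with probability keep_prob.

    Returns a dict {layer_index: params} of surviving layers; pruned layers
    are simply not transmitted to the server in this sketch.
    """
    rng = rng or np.random.default_rng()
    return {i: p for i, p in enumerate(layer_params)
            if rng.random() < keep_prob}
```

In a hypothetical client-side round, each surviving layer would be quantized (`stochastic_quantize(p)` for each `p` in `layer_wise_prune(layers)`) before coding and upload; the unbiasedness of the stochastic rounding is what keeps the aggregated model's expected update unchanged.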
format | Online Article Text |
id | pubmed-10453433 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-104534332023-08-26 Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design Zhu, Zheqi Shi, Yuchen Xin, Gangtao Peng, Chenghui Fan, Pingyi Letaief, Khaled B. Entropy (Basel) Article As a promising distributed learning paradigm, federated learning (FL) faces the challenge of communication–computation bottlenecks in practical deployments. In this work, we mainly focus on the pruning, quantization, and coding of FL. By adopting a layer-wise operation, we propose an explicit and universal scheme: FedLP-Q (federated learning with layer-wise pruning-quantization). Pruning strategies for homogeneity/heterogeneity scenarios, the stochastic quantization rule, and the corresponding coding scheme were developed. Both theoretical and experimental evaluations suggest that FedLP-Q improves the system efficiency of communication and computation with controllable performance degradation. The key novelty of FedLP-Q is that it serves as a joint pruning-quantization FL framework with layer-wise processing and can easily be applied in practical FL systems. MDPI 2023-08-14 /pmc/articles/PMC10453433/ /pubmed/37628235 http://dx.doi.org/10.3390/e25081205 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhu, Zheqi Shi, Yuchen Xin, Gangtao Peng, Chenghui Fan, Pingyi Letaief, Khaled B. Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title | Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title_full | Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title_fullStr | Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title_full_unstemmed | Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title_short | Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design |
title_sort | towards efficient federated learning: layer-wise pruning-quantization scheme and coding design |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453433/ https://www.ncbi.nlm.nih.gov/pubmed/37628235 http://dx.doi.org/10.3390/e25081205 |
work_keys_str_mv | AT zhuzheqi towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign AT shiyuchen towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign AT xingangtao towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign AT pengchenghui towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign AT fanpingyi towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign AT letaiefkhaledb towardsefficientfederatedlearninglayerwisepruningquantizationschemeandcodingdesign |