Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
As a promising distributed learning paradigm, federated learning (FL) faces the challenge of communication–computation bottlenecks in practical deployments. In this work, we mainly focus on the pruning, quantization, and coding of FL. By adopting a layer-wise operation, we propose an explicit and un...
Main authors: Zhu, Zheqi; Shi, Yuchen; Xin, Gangtao; Peng, Chenghui; Fan, Pingyi; Letaief, Khaled B.
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453433/ https://www.ncbi.nlm.nih.gov/pubmed/37628235 http://dx.doi.org/10.3390/e25081205
Similar items
- Why Shape Coding? Asymptotic Analysis of the Entropy Rate for Digital Images
  by: Xin, Gangtao, et al.
  Published: (2022)
- Towards Optimal Compression: Joint Pruning and Quantization
  by: Zandonati, Ben, et al.
  Published: (2023)
- Soft Compression for Lossless Image Coding Based on Shape Recognition
  by: Xin, Gangtao, et al.
  Published: (2021)
- Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC
  by: Ren, Yaoyao, et al.
  Published: (2023)
- Robust, practical and comprehensive analysis of soft compression image coding algorithms for big data
  by: Xin, Gangtao, et al.
  Published: (2023)