Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC
Federated learning enables multiple nodes to perform local computations and collaborate on machine learning tasks without centralizing the nodes' private data. However, the frequent model gradient upload/download operations required by the framework result in high communication costs, which...
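As background for the communication-cost problem the abstract describes, below is a minimal sketch of uniform 8-bit gradient quantization, the generic building block that compression schemes in this family rely on. This is an illustrative assumption, not the paper's TLAQC algorithm, and the function names (`quantize_int8`, `dequantize_int8`) are hypothetical:

```python
import numpy as np

def quantize_int8(grad: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a float32 gradient onto 8-bit integers plus one scale factor."""
    scale = float(np.abs(grad).max()) / 127.0
    if scale == 0.0:  # all-zero gradient: avoid division by zero
        scale = 1.0
    # The client uploads the int8 array and the single float scale.
    return np.round(grad / scale).astype(np.int8), scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Server-side reconstruction of the client's gradient."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
grad = rng.standard_normal(10_000).astype(np.float32)  # a client's local gradient
q, scale = quantize_int8(grad)
print(f"upload: {q.nbytes} B (int8) vs {grad.nbytes} B (float32)")  # ~4x smaller
print(f"max reconstruction error: {np.abs(dequantize_int8(q, scale) - grad).max():.4f}")
```

Sending int8 values plus one float scale cuts each round's upload to roughly a quarter of full precision, at the cost of a per-round reconstruction error; compensating for that error across rounds is presumably what the "accumulated" component of the paper's scheme targets.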
| Field | Value |
|---|---|
| Main Authors | Ren, Yaoyao; Cao, Yu; Ye, Chengyin; Cheng, Xu |
| Format | Online Article Text |
| Language | English |
| Published | Nature Publishing Group UK, 2023 |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10356777/ https://www.ncbi.nlm.nih.gov/pubmed/37468562 http://dx.doi.org/10.1038/s41598-023-38916-x |
Similar Items
- Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
  by: Zhu, Zheqi, et al.
  Published: (2023)
- Communication-efficient federated learning via knowledge distillation
  by: Wu, Chuhan, et al.
  Published: (2022)
- Towards Optimal Compression: Joint Pruning and Quantization
  by: Zandonati, Ben, et al.
  Published: (2023)
- Communication-Efficient and Privacy-Preserving Verifiable Aggregation for Federated Learning
  by: Peng, Kaixin, et al.
  Published: (2023)
- Accumulative Quantization for Approximate Nearest Neighbor Search
  by: Ai, Liefu, et al.
  Published: (2022)