
FedRAD: Heterogeneous Federated Learning via Relational Adaptive Distillation

As the Internet of Things (IoT) continues to develop, Federated Learning (FL) is gaining popularity as a distributed machine learning framework that does not compromise the data privacy of each participant. However, the data held by enterprises and factories in the IoT often follow different distributions (Non-IID), which degrades federated learning: clients forget global knowledge during their local training phase, slowing convergence and reducing accuracy. In this work, we propose FedRAD, a method based on relational knowledge distillation that helps local models mine high-quality global knowledge from a higher-dimensional, relational perspective during local training, so that global knowledge is better retained and forgetting is avoided. We also devise an entropy-wise adaptive weights module (EWAW) that regulates the relative weight of the single-sample and relational knowledge distillation losses, so that student models weight the two losses according to prediction entropy and absorb global knowledge more effectively. A series of experiments on CIFAR10 and CIFAR100 show that FedRAD outperforms other advanced FL methods in both convergence speed and classification accuracy.
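
The abstract describes local training that combines supervised learning with two distillation signals from the global model: a single-sample (per-logit) term and a relational term, balanced adaptively by prediction entropy. The paper's exact loss and EWAW formulation are not given in this record, so the following is only a minimal PyTorch-style sketch of how such a combination could look; the function names, the distance-wise relational loss, and the entropy-based weighting are illustrative assumptions, not FedRAD's actual implementation.

import torch
import torch.nn.functional as F


def pairwise_distances(x):
    # Normalized Euclidean distances between all sample pairs in a batch,
    # as used in distance-wise relational knowledge distillation.
    d = torch.cdist(x, x, p=2)
    mean_d = d[d > 0].mean()
    return d / (mean_d + 1e-8)


def fedrad_style_local_loss(student_logits, teacher_logits, labels, tau=2.0):
    # Supervised cross-entropy on the client's own labels.
    ce = F.cross_entropy(student_logits, labels)

    # Single-sample knowledge distillation: match the global model's
    # softened per-sample predictions.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2

    # Relational knowledge distillation: match the pairwise-distance
    # structure of the two models' outputs across the batch.
    rkd = F.smooth_l1_loss(
        pairwise_distances(student_logits),
        pairwise_distances(teacher_logits),
    )

    # Hypothetical entropy-wise adaptive weight: when the global model is
    # uncertain (high normalized entropy), rely more on the relational
    # term; when it is confident, rely more on the single-sample term.
    probs = F.softmax(teacher_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    w = entropy / torch.log(torch.tensor(float(student_logits.size(1))))

    return ce + (1.0 - w) * kd + w * rkd

For example, with a batch of 32 samples and 10 classes, fedrad_style_local_loss(torch.randn(32, 10), torch.randn(32, 10), torch.randint(0, 10, (32,))) returns a scalar loss that can be backpropagated through the local (student) model. The sketch only makes the roles of the three terms concrete; the actual method may weight them per sample rather than per batch.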


Bibliographic Details
Main Authors: Tang, Jianwu; Ding, Xuefeng; Hu, Dasha; Guo, Bing; Shen, Yuncheng; Ma, Pan; Jiang, Yuming
Format: Online Article, Text
Language: English
Published: Sensors (Basel), MDPI, 19 July 2023
Subjects:
License: © 2023 by the authors. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10385861/
https://www.ncbi.nlm.nih.gov/pubmed/37514811
http://dx.doi.org/10.3390/s23146518