Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul

Bibliographic Details
Main Authors: Yang, Yunji, Hong, Yonggi, Park, Jaehyun
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8537050/
https://www.ncbi.nlm.nih.gov/pubmed/34696003
http://dx.doi.org/10.3390/s21206791
_version_ 1784588156779102208
author Yang, Yunji
Hong, Yonggi
Park, Jaehyun
author_facet Yang, Yunji
Hong, Yonggi
Park, Jaehyun
author_sort Yang, Yunji
collection PubMed
description In this paper, efficient gradient updating strategies are developed for federated learning in which distributed clients are connected to the server via a wireless backhaul link. Specifically, a common convolutional neural network (CNN) module is shared by all distributed clients and trained through federated learning over the wireless backhaul connected to the main server. During the training phase, however, the local gradients must be transferred from multiple clients to the server over the wireless backhaul link and can be distorted by wireless channel fading. To overcome this, an efficient gradient updating method is proposed in which the gradients are combined such that the effective SNR is maximized at the server. In addition, when the backhaul links of all clients simultaneously have small channel gains, the server may receive severely distorted gradient vectors. Accordingly, we also propose a binary gradient updating strategy based on thresholding, in which any round where all channels have small channel gains is excluded from federated learning. Because each client has limited transmission power, it is more effective to allocate power to the channel slots carrying important information than to allocate it equally across all channel resources (equivalently, slots). Accordingly, we also propose an adaptive power allocation method in which each client allocates its transmit power in proportion to the magnitude of its gradient information. This is because, when training a deep learning model, gradient elements with large magnitudes imply a large change in the corresponding weights to decrease the loss function.
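The description above names three mechanisms: SNR-maximizing combining of the received gradients at the server, a thresholded binary updating rule that skips rounds when every backhaul link is in a deep fade, and per-slot power allocation proportional to gradient magnitude. The following Python/NumPy sketch shows one toy federated round combining all three; the client count, gradient dimension, Rayleigh-style fading model, threshold, and maximal-ratio-style combining weights are illustrative assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 4       # hypothetical number of distributed clients
DIM = 8               # hypothetical flattened CNN gradient length
P_TOTAL = 1.0         # per-client transmit power budget (assumed)
NOISE_STD = 0.1       # backhaul receiver noise std. dev. (assumed)
GAIN_THRESHOLD = 0.2  # channel-gain threshold for skipping a round (assumed)

def allocate_power(grad, p_total=P_TOTAL):
    # Adaptive power allocation: spend the budget on each channel slot
    # in proportion to the magnitude of the gradient element it carries,
    # since large elements imply a large loss-decreasing weight change.
    mag = np.abs(grad)
    return p_total * mag / mag.sum()

def transmit(grad, gain):
    # Analog transmission of one client's gradient over a fading link:
    # amplitude sqrt(power) encodes |grad|, the sign carries its direction.
    power = allocate_power(grad)
    tx = np.sqrt(power) * np.sign(grad)
    noise = rng.normal(scale=NOISE_STD, size=grad.shape)
    return gain * tx + noise

# One round with simulated local gradients and fading gains (toy model).
local_grads = rng.normal(size=(NUM_CLIENTS, DIM))
gains = np.abs(rng.normal(size=NUM_CLIENTS))

if np.all(gains < GAIN_THRESHOLD):
    # Binary updating strategy: if every backhaul link is in a deep fade,
    # the aggregate would be severely distorted, so exclude the round.
    print("All channel gains below threshold; round excluded.")
else:
    received = np.stack([transmit(g, h) for g, h in zip(local_grads, gains)])
    # Maximal-ratio-style weights as a stand-in for the paper's
    # effective-SNR-maximizing combiner: stronger links dominate.
    w = gains / np.sum(gains ** 2)
    global_grad = w @ received
    print("Combined gradient estimate:", global_grad)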
format Online
Article
Text
id pubmed-8537050
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8537050 2021-10-24 Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul Yang, Yunji Hong, Yonggi Park, Jaehyun Sensors (Basel) Article MDPI 2021-10-13 /pmc/articles/PMC8537050/ /pubmed/34696003 http://dx.doi.org/10.3390/s21206791 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yang, Yunji
Hong, Yonggi
Park, Jaehyun
Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title_full Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title_fullStr Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title_full_unstemmed Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title_short Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul
title_sort efficient gradient updating strategies with adaptive power allocation for federated learning over wireless backhaul
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8537050/
https://www.ncbi.nlm.nih.gov/pubmed/34696003
http://dx.doi.org/10.3390/s21206791
work_keys_str_mv AT yangyunji efficientgradientupdatingstrategieswithadaptivepowerallocationforfederatedlearningoverwirelessbackhaul
AT hongyonggi efficientgradientupdatingstrategieswithadaptivepowerallocationforfederatedlearningoverwirelessbackhaul
AT parkjaehyun efficientgradientupdatingstrategieswithadaptivepowerallocationforfederatedlearningoverwirelessbackhaul