Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems
Main Authors: | Gad, Gad; Fadlullah, Zubair |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9823596/ https://www.ncbi.nlm.nih.gov/pubmed/36616609 http://dx.doi.org/10.3390/s23010006 |
_version_ | 1784866198850109440 |
---|---|
author | Gad, Gad; Fadlullah, Zubair
author_facet | Gad, Gad; Fadlullah, Zubair
author_sort | Gad, Gad |
collection | PubMed |
description | Deep learning-based Human Activity Recognition (HAR) systems have received considerable interest for health monitoring and activity tracking on wearable devices. Training accurate deep learning models typically requires large, representative datasets. Federated Learning (FL) was introduced as an inherently private distributed training paradigm that keeps private data on users’ devices while still leveraging it to train deep learning models on large datasets. However, standard FL (FedAvg) cannot train heterogeneous model architectures. In this paper, we propose Federated Learning via Augmented Knowledge Distillation (FedAKD) for the distributed training of heterogeneous models. FedAKD is evaluated on two HAR datasets: a waist-mounted tabular HAR dataset and a wrist-mounted time-series HAR dataset. FedAKD is more flexible than FedAvg because it enables collaborative training of heterogeneous deep learning models with differing learning capacities. In the considered FL experiments, the communication overhead under FedAKD is 200× lower than under FL methods that communicate model gradients or weights. Compared with other model-agnostic FL methods, FedAKD improves clients’ performance gains by up to 20 percent, and it is relatively more robust under statistically heterogeneous scenarios. (A minimal illustrative sketch of the logit-sharing idea appears after this record.) |
format | Online Article Text |
id | pubmed-9823596 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9823596 2023-01-08 Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems Gad, Gad; Fadlullah, Zubair Sensors (Basel) Article MDPI 2022-12-20 /pmc/articles/PMC9823596/ /pubmed/36616609 http://dx.doi.org/10.3390/s23010006 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Gad, Gad; Fadlullah, Zubair; Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems
title | Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems |
title_full | Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems |
title_fullStr | Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems |
title_full_unstemmed | Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems |
title_short | Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems |
title_sort | federated learning via augmented knowledge distillation for heterogenous deep human activity recognition systems |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9823596/ https://www.ncbi.nlm.nih.gov/pubmed/36616609 http://dx.doi.org/10.3390/s23010006 |
work_keys_str_mv | AT gadgad federatedlearningviaaugmentedknowledgedistillationforheterogenousdeephumanactivityrecognitionsystems AT fadlullahzubair federatedlearningviaaugmentedknowledgedistillationforheterogenousdeephumanactivityrecognitionsystems |
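The abstract claims a roughly 200× communication saving from exchanging model outputs rather than gradients or weights. The sketch below illustrates one round of the generic logit-sharing (federated distillation) idea that KD-based FL methods build on; it is not the authors’ FedAKD implementation (which additionally augments the distillation procedure), and all names and sizes (`NUM_CLIENTS`, `PUBLIC_SAMPLES`, the 1M-parameter comparison model) are assumptions for illustration only.

```python
# Minimal sketch of one logit-sharing (federated distillation) round.
# All constants are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(seed=0)

NUM_CLIENTS = 3        # hypothetical number of participating clients
PUBLIC_SAMPLES = 500   # hypothetical size of the shared distillation set
NUM_CLASSES = 6        # e.g., six activity classes in a HAR task

def client_logits(client_id: int) -> np.ndarray:
    """Stand-in for client `client_id` running its own local model
    (architectures may differ across clients) over the shared public
    set and returning one logit vector per sample."""
    return rng.normal(size=(PUBLIC_SAMPLES, NUM_CLASSES)) + 0.1 * client_id

# 1. Each client uploads only its logits on the public set.
uploads = [client_logits(c) for c in range(NUM_CLIENTS)]

# 2. The server aggregates the logits (here, an unweighted mean)
#    into global soft labels.
global_soft_labels = np.mean(uploads, axis=0)   # shape (500, 6)

# 3. Each client would now distill locally: train its own model to
#    match `global_soft_labels` on the public set (omitted here).

# Back-of-envelope per-client, per-round communication cost (float32):
logit_bytes = PUBLIC_SAMPLES * NUM_CLASSES * 4  # ~12 kB of logits
weight_bytes = 1_000_000 * 4                    # 4 MB for a 1M-parameter model
print(f"logits: {logit_bytes / 1e3:.0f} kB  vs  weights: "
      f"{weight_bytes / 1e6:.1f} MB "
      f"(~{weight_bytes / logit_bytes:.0f}x saving in this toy setting)")
```

The saving in this toy setting comes out near 333×; the 200× figure reported in the abstract depends on the actual model sizes and distillation-set size used in the paper’s experiments, so the arithmetic above only illustrates why logit payloads are small relative to full model weights.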