A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and in network delays strongly affect the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances training overhead and communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, preserves the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
Main Authors: | Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2017 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5677357/ https://www.ncbi.nlm.nih.gov/pubmed/28934163 http://dx.doi.org/10.3390/s17102172 |
_version_ | 1783277226437902336 |
author | Zhang, Jilin Tu, Hangdi Ren, Yongjian Wan, Jian Zhou, Li Li, Mingwei Wang, Jue Yu, Lifeng Zhao, Chang Zhang, Lei |
author_facet | Zhang, Jilin Tu, Hangdi Ren, Yongjian Wan, Jian Zhou, Li Li, Mingwei Wang, Jue Yu, Lifeng Zhao, Chang Zhang, Lei |
author_sort | Zhang, Jilin |
collection | PubMed |
description | To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and in network delays strongly affect the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances training overhead and communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, preserves the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. |
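The description characterizes DSP as a bounded-staleness synchronization scheme: each worker monitors its own performance and dynamically adjusts how often it synchronizes parameters with the Parameter Server. The sketch below illustrates that idea in Python. It is a minimal, hypothetical reconstruction rather than the paper's actual implementation: the class names (ParameterServer, Worker), the base_staleness parameter, and the monitoring rule that widens the staleness bound for slower workers are all assumptions made for illustration.

```python
# Hypothetical sketch of the Dynamic Synchronous Parallel (DSP) idea from the
# abstract: a worker monitors its own iteration speed and refreshes its local
# parameters from the Parameter Server (PS) only when a dynamically adjusted
# staleness bound is exceeded. All names and the monitoring rule are
# illustrative assumptions, not the paper's actual API.

import random
import time


class ParameterServer:
    """Holds the global model and a global clock of applied updates."""

    def __init__(self, dim):
        self.params = [0.0] * dim
        self.clock = 0  # number of updates applied so far

    def push(self, gradient, lr=0.1):
        # Apply a worker's gradient to the global parameters.
        self.params = [p - lr * g for p, g in zip(self.params, gradient)]
        self.clock += 1

    def pull(self):
        # Return a snapshot of the global parameters and the current clock.
        return list(self.params), self.clock


class Worker:
    """A worker that adapts its synchronization frequency to its own speed."""

    def __init__(self, ps, base_staleness=4):
        self.ps = ps
        self.params, self.local_clock = ps.pull()
        self.base_staleness = base_staleness
        self.iter_times = []  # performance-monitoring history

    def dynamic_bound(self):
        # Hypothetical monitoring rule standing in for the paper's performance
        # monitoring model: a worker whose last iteration was slower than its
        # own average gets a larger staleness bound (communicates less often),
        # while a fast worker synchronizes more eagerly.
        if len(self.iter_times) < 2:
            return self.base_staleness
        avg = sum(self.iter_times) / len(self.iter_times)
        ratio = self.iter_times[-1] / avg if avg > 0 else 1.0
        return max(1, round(self.base_staleness * ratio))

    def step(self):
        start = time.perf_counter()
        # Fake gradient computation; a real worker would use its local data.
        gradient = [random.uniform(-1, 1) for _ in self.params]
        time.sleep(random.uniform(0.001, 0.005))  # simulated compute time
        self.iter_times.append(time.perf_counter() - start)

        self.ps.push(gradient)
        _, global_clock = self.ps.pull()
        staleness = global_clock - self.local_clock

        # Bounded-staleness synchronization: only refresh local parameters
        # when the dynamically chosen bound is exceeded.
        if staleness > self.dynamic_bound():
            self.params, self.local_clock = self.ps.pull()


if __name__ == "__main__":
    ps = ParameterServer(dim=8)
    workers = [Worker(ps) for _ in range(3)]
    for _ in range(20):
        for w in workers:
            w.step()
    print("global clock:", ps.clock)
```

The point of the sketch is the trigger condition: unlike a fixed bound as in Stale Synchronous Parallel, the bound here is recomputed each step from the worker's monitored iteration times, which is how the abstract describes DSP adapting to heterogeneous sensor speeds.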
format | Online Article Text |
id | pubmed-5677357 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-5677357 2017-11-17 A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors Zhang, Jilin Tu, Hangdi Ren, Yongjian Wan, Jian Zhou, Li Li, Mingwei Wang, Jue Yu, Lifeng Zhao, Chang Zhang, Lei Sensors (Basel) Article To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and in network delays strongly affect the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances training overhead and communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, preserves the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. MDPI 2017-09-21 /pmc/articles/PMC5677357/ /pubmed/28934163 http://dx.doi.org/10.3390/s17102172 Text en © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhang, Jilin Tu, Hangdi Ren, Yongjian Wan, Jian Zhou, Li Li, Mingwei Wang, Jue Yu, Lifeng Zhao, Chang Zhang, Lei A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title | A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title_full | A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title_fullStr | A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title_full_unstemmed | A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title_short | A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors |
title_sort | parameter communication optimization strategy for distributed machine learning in sensors |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5677357/ https://www.ncbi.nlm.nih.gov/pubmed/28934163 http://dx.doi.org/10.3390/s17102172 |
work_keys_str_mv | AT zhangjilin aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT tuhangdi aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT renyongjian aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT wanjian aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhouli aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT limingwei aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT wangjue aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT yulifeng aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhaochang aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhanglei aparametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhangjilin parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT tuhangdi parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT renyongjian parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT wanjian parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhouli parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT limingwei parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT wangjue parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT yulifeng parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhaochang parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors AT zhanglei parametercommunicationoptimizationstrategyfordistributedmachinelearninginsensors |