An Edge Server Placement Method Based on Reinforcement Learning
In mobile edge computing systems, the edge server placement problem is mainly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic or meta-heuristic algorithms, etc. These methods, however, suffer from significant drawbacks such as poor scalability,...
Main Authors: | Luo, Fei; Zheng, Shuai; Ding, Weichao; Fuentes, Joel; Li, Yong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8946978/ https://www.ncbi.nlm.nih.gov/pubmed/35327828 http://dx.doi.org/10.3390/e24030317 |
_version_ | 1784674329227689984 |
---|---|
author | Luo, Fei; Zheng, Shuai; Ding, Weichao; Fuentes, Joel; Li, Yong |
author_facet | Luo, Fei; Zheng, Shuai; Ding, Weichao; Fuentes, Joel; Li, Yong |
author_sort | Luo, Fei |
collection | PubMed |
description | In mobile edge computing systems, the edge server placement problem is mainly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic or meta-heuristic algorithms, etc. These methods, however, suffer from significant drawbacks such as poor scalability, convergence to local optima, and difficult parameter tuning. To overcome these defects, we propose a novel edge server placement algorithm based on deep Q-network and reinforcement learning, dubbed DQN-ESPA, which can achieve optimal placements without relying on previous placement experience. In DQN-ESPA, the edge server placement problem is modeled as a Markov decision process, which is formalized with the state space, action space and reward function, and is subsequently solved using a reinforcement learning algorithm. Experimental results using real datasets from Shanghai Telecom show that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm (SAPA), the Top-K placement algorithm (TKPA), the K-Means placement algorithm (KMPA), and the random placement algorithm (RPA). In particular, with a comprehensive consideration of access delay and workload balance, DQN-ESPA achieves up to 13.40% and 15.54% better placement performance for 100 and 300 edge servers, respectively. (An illustrative code sketch of this MDP formulation follows the record fields below.) |
format | Online Article Text |
id | pubmed-8946978 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8946978 2022-03-25 An Edge Server Placement Method Based on Reinforcement Learning Luo, Fei; Zheng, Shuai; Ding, Weichao; Fuentes, Joel; Li, Yong Entropy (Basel) Article In mobile edge computing systems, the edge server placement problem is mainly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic or meta-heuristic algorithms, etc. These methods, however, suffer from significant drawbacks such as poor scalability, convergence to local optima, and difficult parameter tuning. To overcome these defects, we propose a novel edge server placement algorithm based on deep Q-network and reinforcement learning, dubbed DQN-ESPA, which can achieve optimal placements without relying on previous placement experience. In DQN-ESPA, the edge server placement problem is modeled as a Markov decision process, which is formalized with the state space, action space and reward function, and is subsequently solved using a reinforcement learning algorithm. Experimental results using real datasets from Shanghai Telecom show that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm (SAPA), the Top-K placement algorithm (TKPA), the K-Means placement algorithm (KMPA), and the random placement algorithm (RPA). In particular, with a comprehensive consideration of access delay and workload balance, DQN-ESPA achieves up to 13.40% and 15.54% better placement performance for 100 and 300 edge servers, respectively. MDPI 2022-02-23 /pmc/articles/PMC8946978/ /pubmed/35327828 http://dx.doi.org/10.3390/e24030317 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Luo, Fei; Zheng, Shuai; Ding, Weichao; Fuentes, Joel; Li, Yong; An Edge Server Placement Method Based on Reinforcement Learning |
title | An Edge Server Placement Method Based on Reinforcement Learning |
title_full | An Edge Server Placement Method Based on Reinforcement Learning |
title_fullStr | An Edge Server Placement Method Based on Reinforcement Learning |
title_full_unstemmed | An Edge Server Placement Method Based on Reinforcement Learning |
title_short | An Edge Server Placement Method Based on Reinforcement Learning |
title_sort | edge server placement method based on reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8946978/ https://www.ncbi.nlm.nih.gov/pubmed/35327828 http://dx.doi.org/10.3390/e24030317 |
work_keys_str_mv | AT luofei anedgeserverplacementmethodbasedonreinforcementlearning AT zhengshuai anedgeserverplacementmethodbasedonreinforcementlearning AT dingweichao anedgeserverplacementmethodbasedonreinforcementlearning AT fuentesjoel anedgeserverplacementmethodbasedonreinforcementlearning AT liyong anedgeserverplacementmethodbasedonreinforcementlearning AT luofei edgeserverplacementmethodbasedonreinforcementlearning AT zhengshuai edgeserverplacementmethodbasedonreinforcementlearning AT dingweichao edgeserverplacementmethodbasedonreinforcementlearning AT fuentesjoel edgeserverplacementmethodbasedonreinforcementlearning AT liyong edgeserverplacementmethodbasedonreinforcementlearning |
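
To make the approach summarized in the abstract more concrete, below is a minimal, hypothetical sketch (not the authors' code, and not based on the Shanghai Telecom dataset) of how edge server placement can be cast as a Markov decision process and trained with a small deep Q-network: the state is the current binary placement vector over candidate base stations, an action places a server at a free station, and a terminal reward combines mean access delay and workload imbalance. All names, sizes, and weights are illustrative assumptions.

```python
# Illustrative sketch only: toy MDP + DQN loop for edge server placement.
# State  = binary vector over candidate base stations (1 = server placed).
# Action = index of a free station at which to place the next server.
# Reward = negative weighted sum of mean access delay and workload imbalance,
#          given only when all K servers have been placed.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_STATIONS, K_SERVERS = 20, 4                  # hypothetical problem size
rng = np.random.default_rng(0)
coords = rng.random((N_STATIONS, 2))           # synthetic base-station locations
workload = rng.random(N_STATIONS) + 0.1        # synthetic per-station workload


def terminal_reward(placement: np.ndarray) -> float:
    """Negative weighted sum of mean access delay and workload imbalance."""
    servers = np.flatnonzero(placement)
    dist = np.linalg.norm(coords[:, None] - coords[servers][None], axis=-1)
    nearest = dist.argmin(axis=1)                       # each station joins its nearest server
    delay = dist[np.arange(N_STATIONS), nearest].mean()
    load = np.bincount(nearest, weights=workload, minlength=len(servers))
    return -(1.0 * delay + 1.0 * load.std())            # hypothetical trade-off weights


qnet = nn.Sequential(nn.Linear(N_STATIONS, 64), nn.ReLU(), nn.Linear(64, N_STATIONS))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=5000)


def choose_action(state: np.ndarray, eps: float) -> int:
    """Epsilon-greedy choice over stations that do not yet host a server."""
    free = np.flatnonzero(state == 0)
    if random.random() < eps:
        return int(rng.choice(free))
    with torch.no_grad():
        q = qnet(torch.tensor(state)).numpy()
    q[state == 1] = -np.inf                             # mask occupied stations
    return int(q.argmax())


for episode in range(300):
    state = np.zeros(N_STATIONS, dtype=np.float32)
    for step in range(K_SERVERS):
        action = choose_action(state, eps=max(0.05, 1.0 - episode / 200))
        nxt = state.copy()
        nxt[action] = 1.0
        done = step == K_SERVERS - 1
        reward = terminal_reward(nxt) if done else 0.0  # reward only at full placement
        replay.append((state, action, reward, nxt, done))
        state = nxt

    if len(replay) >= 64:                               # one gradient step per episode
        batch = random.sample(list(replay), 64)
        s, a, r, s2, d = map(np.array, zip(*batch))
        q_sa = qnet(torch.tensor(s)).gather(1, torch.tensor(a).unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_next = qnet(torch.tensor(s2)).max(1).values
            target = torch.tensor(r, dtype=torch.float32) + 0.99 * q_next * (
                1.0 - torch.tensor(d, dtype=torch.float32)
            )
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

This sketch only illustrates the MDP framing described in the abstract; a full DQN-ESPA implementation would additionally need the paper's actual state/action encoding and reward weights, a separate target network, replay warm-up, and the real base-station data.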