
Deep Reinforcement Learning–Based Online One-to-Multiple Charging Scheme in Wireless Rechargeable Sensor Network



Bibliographic Details
Main Authors: Gong, Zheng; Wu, Hao; Feng, Yong; Liu, Nianbo
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10143104/
https://www.ncbi.nlm.nih.gov/pubmed/37112245
http://dx.doi.org/10.3390/s23083903
Description
Summary: Wireless rechargeable sensor networks (WRSN) have emerged as an effective solution to the energy constraint problem of wireless sensor networks (WSN). However, most existing charging schemes use a Mobile Charger (MC) to charge nodes one-to-one and do not optimize MC scheduling from a more comprehensive perspective, making it difficult to meet the huge energy demand of large-scale WSNs; one-to-multiple charging, which can charge multiple nodes simultaneously, may therefore be a more reasonable choice. To achieve timely and efficient energy replenishment for large-scale WSNs, we propose an online one-to-multiple charging scheme based on Deep Reinforcement Learning, which utilizes Double Dueling DQN (3DQN) to jointly optimize both the MC's charging sequence and the charging amount of nodes. The scheme cellularizes the whole network based on the MC's effective charging distance and uses 3DQN to determine the optimal charging cell sequence with the objective of minimizing dead nodes, adjusting the charging amount of each recharged cell according to the energy demand of the nodes in the cell, the network survival time, and the MC's residual energy. To obtain better performance and timeliness in varying environments, the scheme further utilizes Dueling DQN to improve training stability and Double DQN to reduce overestimation. Extensive simulation experiments show that the proposed scheme achieves better charging performance than several existing typical works, with significant advantages in reducing the node dead ratio and charging latency.
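The two DQN refinements the abstract names can be sketched in a few lines: the dueling architecture combines a state value with mean-centered action advantages, and Double DQN lets the online network pick the next action while the target network evaluates it. This is a minimal illustrative sketch of those standard mechanisms, not code from the paper; the function names and inputs are hypothetical.

```python
import numpy as np

def dueling_q(state_value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Centering the advantages keeps V and A identifiable, which is the
    training-stability benefit the abstract attributes to Dueling DQN.
    """
    advantages = np.asarray(advantages, dtype=float)
    return state_value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    Selecting the action with the online network but evaluating it with the
    target network reduces the overestimation of plain DQN's max operator.
    """
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online_next))  # online net selects the action
    return float(reward + gamma * q_target_next[a_star])  # target net evaluates it
```

In a 3DQN agent such as the one described, both pieces are used together: the dueling head produces the Q-values over candidate charging cells, and the double-DQN target supplies the regression label during training.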