
Deep Reinforcement Learning for Charging Scheduling of Electric Vehicles Considering Distribution Network Voltage Stability


Bibliographic Details
Main Authors: Liu, Ding, Zeng, Peng, Cui, Shijie, Song, Chunhe
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9920735/
https://www.ncbi.nlm.nih.gov/pubmed/36772657
http://dx.doi.org/10.3390/s23031618
author Liu, Ding
Zeng, Peng
Cui, Shijie
Song, Chunhe
collection PubMed
description The rapid development of electric vehicle (EV) technology and the consequent charging demand have brought challenges to the stable operation of distribution networks (DNs). The problem of collaboratively optimizing EV charging scheduling and DN voltage control is intractable because the uncertainties of both the EVs and the DN must be considered. In this paper, we propose a deep reinforcement learning (DRL) approach to coordinate EV charging scheduling and distribution network voltage control. The DRL-based strategy contains two layers: the upper layer aims to reduce the operating costs of power generation by distributed generators and of power consumption by EVs, while the lower layer controls the Volt/Var devices to maintain the voltage stability of the distribution network. We model the coordinated EV charging scheduling and voltage control problem in the distribution network as a Markov decision process (MDP). The model considers the uncertainties of the charging process caused by the charging behavior of EV users, as well as the uncertainties of uncontrollable load, dynamic electricity prices, and renewable energy generation. Since the model has a dynamic state space and mixed action outputs, a deep deterministic policy gradient (DDPG) framework is adopted to train the two-layer agent, and the policy network is designed to output both discrete and continuous control actions. Simulation and numerical results on the IEEE 33-bus test system demonstrate the effectiveness of the proposed method in collaborative EV charging scheduling and distribution network voltage stabilization.
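A key design point in the abstract is a single policy network that emits mixed action outputs: continuous EV charging set-points alongside discrete Volt/Var device actions. The sketch below illustrates what such a two-headed actor can look like. It is a minimal illustrative sketch, not the authors' implementation: the state features, layer sizes, and action dimensions (number of EVs, number of tap positions) are all assumptions.

```python
import numpy as np

# Illustrative two-headed actor for mixed discrete/continuous actions.
# Dimensions below are assumptions, not taken from the paper.
rng = np.random.default_rng(0)

STATE_DIM = 8   # e.g., bus voltages, electricity price, EV state-of-charge (assumed)
N_EV = 3        # continuous head: one charging-power set-point per EV (assumed)
N_TAP = 5       # discrete head: candidate Volt/Var tap positions (assumed)
HIDDEN = 16

# Randomly initialized weights stand in for a trained DDPG actor.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W_cont = rng.normal(0.0, 0.1, (HIDDEN, N_EV))
W_disc = rng.normal(0.0, 0.1, (HIDDEN, N_TAP))

def actor(state):
    """Map a state vector to (continuous powers in [-1, 1], discrete tap index)."""
    h = np.tanh(state @ W1)
    p_ev = np.tanh(h @ W_cont)        # continuous head: bounded charging powers
    tap_logits = h @ W_disc           # discrete head: one logit per tap setting
    tap = int(np.argmax(tap_logits))  # greedy discrete choice at deployment
    return p_ev, tap

state = rng.normal(size=STATE_DIM)
p_ev, tap = actor(state)
```

During DDPG training, the discrete head is typically kept differentiable (e.g., via its logits or a softmax relaxation), with the hard argmax applied only when actions are executed on the grid.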
format Online
Article
Text
id pubmed-9920735
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9920735 2023-02-12 Deep Reinforcement Learning for Charging Scheduling of Electric Vehicles Considering Distribution Network Voltage Stability. Liu, Ding; Zeng, Peng; Cui, Shijie; Song, Chunhe. Sensors (Basel), Article. MDPI 2023-02-02 /pmc/articles/PMC9920735/ /pubmed/36772657 http://dx.doi.org/10.3390/s23031618 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Deep Reinforcement Learning for Charging Scheduling of Electric Vehicles Considering Distribution Network Voltage Stability
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9920735/
https://www.ncbi.nlm.nih.gov/pubmed/36772657
http://dx.doi.org/10.3390/s23031618