
Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control

Existing inefficient traffic signal plans are causing traffic congestion in many urban areas. In recent years, many deep reinforcement learning (RL) methods have been proposed to control traffic signals in real time by interacting with the environment. However, most existing state-of-the-art RL...


Bibliographic Details
Main Authors: Ibrokhimov, Bunyodbek; Kim, Young-Joo; Kang, Sanggil
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002556/
https://www.ncbi.nlm.nih.gov/pubmed/35408431
http://dx.doi.org/10.3390/s22072818
_version_ 1784685919173869568
author Ibrokhimov, Bunyodbek
Kim, Young-Joo
Kang, Sanggil
author_facet Ibrokhimov, Bunyodbek
Kim, Young-Joo
Kang, Sanggil
author_sort Ibrokhimov, Bunyodbek
collection PubMed
description Existing inefficient traffic signal plans are causing traffic congestion in many urban areas. In recent years, many deep reinforcement learning (RL) methods have been proposed to control traffic signals in real time by interacting with the environment. However, most existing state-of-the-art RL methods use complex state definitions and reward functions and/or neglect real-world constraints such as cyclic phase order and minimum/maximum durations for each traffic phase. These issues make existing methods infeasible for real-world applications. In this paper, we propose an RL-based multi-intersection traffic light control model with a simple yet effective combination of state, reward, and action definitions. The proposed model uses a novel pressure method called Biased Pressure (BP). We use a state-of-the-art advantage actor-critic learning mechanism in our model. Due to the decentralized nature of our state, reward, and action definitions, we achieve a scalable model. The performance of the proposed method is compared with that of related methods using both synthetic and real-world datasets. Experimental results show that our method outperforms existing cyclic phase control methods by a significant margin in terms of throughput and average travel time. Moreover, we conduct ablation studies to justify the superiority of the BP method over existing pressure methods.
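The abstract builds on the notion of intersection pressure from classical max-pressure signal control but does not spell out the Biased Pressure (BP) formula itself. As background only, here is a minimal sketch of how a phase's classical pressure is computed; the lane names, queue counts, and the additive `bias` placeholder are illustrative assumptions, not definitions from the paper:

```python
# Background sketch of classical phase pressure (max-pressure control).
# The paper's Biased Pressure (BP) modifies this quantity; since the
# abstract does not give the BP formula, `bias` below is a hypothetical
# stand-in for whatever adjustment BP applies.

def phase_pressure(movements, queue, bias=0.0):
    """Pressure of one signal phase.

    movements: (incoming_lane, outgoing_lane) pairs served by the phase
    queue:     dict mapping lane id -> number of waiting vehicles
    bias:      hypothetical additive term standing in for BP's adjustment
    """
    # Classical pressure: total upstream queue minus total downstream queue.
    return sum(queue[inc] - queue[out] for inc, out in movements) + bias

# Toy example: a north-south through phase at a single intersection.
queues = {"N_in": 8, "S_in": 5, "N_out": 2, "S_out": 1}
ns_through = [("N_in", "S_out"), ("S_in", "N_out")]
print(phase_pressure(ns_through, queues))  # (8 - 1) + (5 - 2) + 0.0 = 10.0
```

Note that a cyclic controller like the one described would not simply switch to the highest-pressure phase at will; it would use such a pressure signal inside its state and reward while stepping through phases in a fixed order with bounded durations.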
format Online
Article
Text
id pubmed-9002556
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9002556 2022-04-13 Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control Ibrokhimov, Bunyodbek; Kim, Young-Joo; Kang, Sanggil. Sensors (Basel), Article. Existing inefficient traffic signal plans are causing traffic congestion in many urban areas. In recent years, many deep reinforcement learning (RL) methods have been proposed to control traffic signals in real time by interacting with the environment. However, most existing state-of-the-art RL methods use complex state definitions and reward functions and/or neglect real-world constraints such as cyclic phase order and minimum/maximum durations for each traffic phase. These issues make existing methods infeasible for real-world applications. In this paper, we propose an RL-based multi-intersection traffic light control model with a simple yet effective combination of state, reward, and action definitions. The proposed model uses a novel pressure method called Biased Pressure (BP). We use a state-of-the-art advantage actor-critic learning mechanism in our model. Due to the decentralized nature of our state, reward, and action definitions, we achieve a scalable model. The performance of the proposed method is compared with that of related methods using both synthetic and real-world datasets. Experimental results show that our method outperforms existing cyclic phase control methods by a significant margin in terms of throughput and average travel time. Moreover, we conduct ablation studies to justify the superiority of the BP method over existing pressure methods. MDPI 2022-04-06 /pmc/articles/PMC9002556/ /pubmed/35408431 http://dx.doi.org/10.3390/s22072818 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
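For reference, the advantage actor-critic mechanism named in the record is the standard one; in the usual notation of the RL literature (these symbols are generic, not taken from the paper), the critic supplies the advantage estimate that weights the actor's policy gradient:

\[ A(s_t, a_t) \approx r_t + \gamma V(s_{t+1}) - V(s_t) \]

where \( r_t \) is the (here, pressure-based) reward, \( \gamma \) the discount factor, and \( V \) the learned value function.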
spellingShingle Article
Ibrokhimov, Bunyodbek
Kim, Young-Joo
Kang, Sanggil
Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title_full Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title_fullStr Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title_full_unstemmed Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title_short Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control
title_sort biased pressure: cyclic reinforcement learning model for intelligent traffic signal control
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002556/
https://www.ncbi.nlm.nih.gov/pubmed/35408431
http://dx.doi.org/10.3390/s22072818
work_keys_str_mv AT ibrokhimovbunyodbek biasedpressurecyclicreinforcementlearningmodelforintelligenttrafficsignalcontrol
AT kimyoungjoo biasedpressurecyclicreinforcementlearningmodelforintelligenttrafficsignalcontrol
AT kangsanggil biasedpressurecyclicreinforcementlearningmodelforintelligenttrafficsignalcontrol