
Modeling Car-Following Behaviors and Driving Styles with Generative Adversarial Imitation Learning


Bibliographic Details
Main Authors: Zhou, Yang; Fu, Rui; Wang, Chang; Zhang, Ruibin
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7571238/
https://www.ncbi.nlm.nih.gov/pubmed/32899773
http://dx.doi.org/10.3390/s20185034
Description: Building a human-like car-following model that can accurately simulate drivers’ car-following behaviors is helpful to the development of driving assistance systems and autonomous driving. Recent studies have shown the advantages of applying reinforcement learning methods in car-following modeling. However, it remains difficult to specify the reward function manually. This paper proposes a novel car-following model based on generative adversarial imitation learning. The proposed model can learn the strategy from drivers’ demonstrations without specifying the reward. Gated recurrent units were incorporated into the actor-critic network to enable the model to use historical information. Drivers’ car-following data collected by a test vehicle equipped with a millimeter-wave radar and a controller area network acquisition card were used. The participants were divided into two driving styles by K-means clustering, with time-headway and time-headway when braking used as input features. Five-fold cross-validation was adopted for model evaluation; the results show that the proposed model can reproduce drivers’ car-following trajectories and driving styles more accurately than the intelligent driver model and the recurrent neural network-based model, with the lowest average spacing error (19.40%) and speed validation error (5.57%), as well as the lowest Kullback–Leibler divergences of the two indicators used for driving style clustering.
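The central idea in the abstract, replacing a hand-specified reward with a learning signal supplied by a discriminator, can be illustrated with a deliberately minimal sketch. This is not the paper's model (which trains GRU-based actor-critic networks with generative adversarial imitation learning); it assumes a plain logistic discriminator over hypothetical 2-D state-action features and synthetic data, and shows only how the GAIL surrogate reward, -log(1 - D(s, a)), is derived from the discriminator's output.

```python
import numpy as np

def discriminator(sa, w, b):
    """Logistic discriminator D(s, a): probability the pair came from the expert."""
    z = np.clip(sa @ w + b, -30.0, 30.0)  # clip for numerical stability
    return 1.0 / (1.0 + np.exp(-z))

def surrogate_reward(sa, w, b, eps=1e-8):
    """GAIL replaces the hand-crafted reward with -log(1 - D(s, a)):
    pairs the discriminator rates as expert-like receive higher reward."""
    return -np.log(1.0 - discriminator(sa, w, b) + eps)

def disc_grad_step(expert_sa, policy_sa, w, b, lr=0.1):
    """One gradient-ascent step on the discriminator objective
    E_expert[log D] + E_policy[log(1 - D)]."""
    d_e = discriminator(expert_sa, w, b)
    d_p = discriminator(policy_sa, w, b)
    grad_w = expert_sa.T @ (1.0 - d_e) / len(expert_sa) - policy_sa.T @ d_p / len(policy_sa)
    grad_b = np.mean(1.0 - d_e) - np.mean(d_p)
    return w + lr * grad_w, b + lr * grad_b

# Hypothetical, well-separated synthetic (state, action) features; invented
# for illustration only, not data from the study.
rng = np.random.default_rng(0)
expert_sa = rng.normal(1.0, 0.1, size=(200, 2))
policy_sa = rng.normal(-1.0, 0.1, size=(200, 2))

w, b = np.zeros(2), 0.0
for _ in range(200):
    w, b = disc_grad_step(expert_sa, policy_sa, w, b)

r_expert = surrogate_reward(expert_sa, w, b).mean()
r_policy = surrogate_reward(policy_sa, w, b).mean()
```

After training, expert-like pairs earn a higher surrogate reward than policy-generated pairs, which is the signal the policy is then optimized against in place of a manually designed reward.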
Id: pubmed-7571238
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published online: 2020-09-04
License: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
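The driving-style step described in the abstract, K-means clustering on time-headway and time-headway when braking, can be sketched in a self-contained way. The headway values below are invented for illustration and are not the study's data; the sketch only shows the two-cluster assignment the abstract describes.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal k-means: returns (centroids, labels) for a list of feature tuples."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, labels

# Hypothetical (time-headway, time-headway-when-braking) features in seconds:
# shorter headways suggest an aggressive style, longer a conservative one.
drivers = [(0.9, 1.1), (1.0, 1.2), (1.1, 1.0),
           (2.2, 2.6), (2.4, 2.8), (2.1, 2.5)]
centroids, labels = kmeans(drivers, k=2)
```

With two well-separated groups like these, the three short-headway drivers and the three long-headway drivers end up in different clusters, mirroring the two driving styles the study identifies.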