Research on imaging method of driver's attention area based on deep neural network
Main authors: | Zhao, Shuanfeng; Li, Yao; Ma, Junjie; Xing, Zhizhong; Tang, Zenghui; Zhu, Shibo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9525277/ https://www.ncbi.nlm.nih.gov/pubmed/36180777 http://dx.doi.org/10.1038/s41598-022-20829-w |
_version_ | 1784800673054851072 |
---|---|
author | Zhao, Shuanfeng Li, Yao Ma, Junjie Xing, Zhizhong Tang, Zenghui Zhu, Shibo |
author_facet | Zhao, Shuanfeng Li, Yao Ma, Junjie Xing, Zhizhong Tang, Zenghui Zhu, Shibo |
author_sort | Zhao, Shuanfeng |
collection | PubMed |
description | In the driving process, the driver's visual attention area is of great significance to research on intelligent driving decision-making behavior and to the dynamic study of driving behavior. Traditional driver intention recognition suffers from problems such as intrusive contact from wearable equipment, high false-detection rates for drivers wearing glasses or under strong light, and unclear extraction of the field of view. We use the driver's field-of-view images captured by a dash cam together with the corresponding vehicle driving-state data (steering wheel angle and vehicle speed). Combined with an interpretability method for deep neural networks, a method for imaging the driver's attention area is proposed. The basic idea of this method is to perform attention-imaging analysis on a neural-network virtual driver based on the vehicle driving-state data, and then to infer the visual attention area of the human driver. The results show that this method can realize reverse reasoning about the driver's intention behavior during driving, image the driver's visual attention area, and provide a theoretical basis for the dynamic analysis of the driver's driving behavior and the further development of traffic safety analysis. |
format | Online Article Text |
id | pubmed-9525277 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9525277 2022-10-02 Research on imaging method of driver's attention area based on deep neural network Zhao, Shuanfeng Li, Yao Ma, Junjie Xing, Zhizhong Tang, Zenghui Zhu, Shibo Sci Rep Article In the driving process, the driver's visual attention area is of great significance to research on intelligent driving decision-making behavior and to the dynamic study of driving behavior. Traditional driver intention recognition suffers from problems such as intrusive contact from wearable equipment, high false-detection rates for drivers wearing glasses or under strong light, and unclear extraction of the field of view. We use the driver's field-of-view images captured by a dash cam together with the corresponding vehicle driving-state data (steering wheel angle and vehicle speed). Combined with an interpretability method for deep neural networks, a method for imaging the driver's attention area is proposed. The basic idea of this method is to perform attention-imaging analysis on a neural-network virtual driver based on the vehicle driving-state data, and then to infer the visual attention area of the human driver. The results show that this method can realize reverse reasoning about the driver's intention behavior during driving, image the driver's visual attention area, and provide a theoretical basis for the dynamic analysis of the driver's driving behavior and the further development of traffic safety analysis. |
Nature Publishing Group UK 2022-09-30 /pmc/articles/PMC9525277/ /pubmed/36180777 http://dx.doi.org/10.1038/s41598-022-20829-w Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Zhao, Shuanfeng Li, Yao Ma, Junjie Xing, Zhizhong Tang, Zenghui Zhu, Shibo Research on imaging method of driver's attention area based on deep neural network |
title | Research on imaging method of driver's attention area based on deep neural network |
title_full | Research on imaging method of driver's attention area based on deep neural network |
title_fullStr | Research on imaging method of driver's attention area based on deep neural network |
title_full_unstemmed | Research on imaging method of driver's attention area based on deep neural network |
title_short | Research on imaging method of driver's attention area based on deep neural network |
title_sort | research on imaging method of driver's attention area based on deep neural network |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9525277/ https://www.ncbi.nlm.nih.gov/pubmed/36180777 http://dx.doi.org/10.1038/s41598-022-20829-w |
work_keys_str_mv | AT zhaoshuanfeng researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork AT liyao researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork AT majunjie researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork AT xingzhizhong researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork AT tangzenghui researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork AT zhushibo researchonimagingmethodofdriversattentionareabasedondeepneuralnetwork |
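The record's abstract describes imaging a driver's attention area by applying an interpretability method to a neural-network "virtual driver" that predicts vehicle driving state. The paper's own network and interpretability method are not given in this record; the sketch below is a minimal, hypothetical illustration of the general idea using gradient-based saliency on a toy linear model (the model, its weights, and all names here are assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical "virtual driver": a linear model mapping a flattened
# dash-cam frame to a steering-angle prediction. The paper uses a deep
# network; a linear model keeps the gradient exact and the sketch short.
rng = np.random.default_rng(0)
H, W = 8, 8
weights = rng.normal(size=(H * W,))  # stand-in for learned parameters

def predict_steering(frame):
    """Predict a steering angle from an H x W grayscale frame."""
    return float(weights @ frame.ravel())

def attention_map(frame):
    """Gradient-based attention: |d(steering)/d(pixel)| per pixel.

    For a linear model the gradient equals the weight vector, so the map
    is input-independent; a deep network would make it vary per frame.
    """
    grad = np.abs(weights).reshape(H, W)
    return grad / grad.max()  # normalize to [0, 1]

frame = rng.uniform(size=(H, W))
heat = attention_map(frame)
# The heatmap peak marks the region the virtual driver relies on most,
# which the paper's method then interprets as the human driver's likely
# visual attention area.
peak = np.unravel_index(np.argmax(heat), heat.shape)
```

In the full method, this per-pixel relevance map would be computed for each frame and overlaid on the dash-cam image to visualize the inferred attention area.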