Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks
Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural network (CNN) applied on video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods.
Main Authors: Pi, Yalong; Duffield, Nick; Behzadan, Amir H.; Lomax, Tim
Format: Online Article Text
Language: English
Published: Springer Singapore, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8731210/ https://www.ncbi.nlm.nih.gov/pubmed/35013737 http://dx.doi.org/10.1007/s43762-021-00031-w
_version_ | 1784627309958922240 |
author | Pi, Yalong; Duffield, Nick; Behzadan, Amir H.; Lomax, Tim |
author_facet | Pi, Yalong; Duffield, Nick; Behzadan, Amir H.; Lomax, Tim |
author_sort | Pi, Yalong |
collection | PubMed |
description | Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural network (CNN) applied on video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods. This paper presents a framework designed to take videos as input and output traffic volume counts and intersection turning patterns. The framework first uses a CNN model and an object tracking algorithm to detect and track vehicles in the camera’s pixel view. Homographic projection then maps vehicle spatial-temporal information (including unique ID, location, and timestamp) onto an orthogonal real-scale map, from which the traffic counts and turns are computed. Several videos are manually labeled and compared with the framework output. Results show a robust traffic volume count accuracy of up to 96.91%. Moreover, this work investigates performance-influencing factors, including lighting condition (over a 24-h period), pixel size, and camera angle. Based on the analysis, it is suggested to place cameras such that the detection pixel size is above 2343 and the view angle is below 22°, for more accurate counts. Next, previous and current traffic reports after Texas A&M home football games are compared with the framework output. Results suggest that the proposed framework is able to reproduce traffic volume change trends for different traffic directions. Lastly, this work also contributes a new intersection turning pattern, i.e., counts for each ingress-egress edge pair, with an optimization technique that results in an accuracy between 43% and 72%. |
format | Online Article Text |
id | pubmed-8731210 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Singapore |
record_format | MEDLINE/PubMed |
spelling | pubmed-8731210 2022-01-06 Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks Pi, Yalong Duffield, Nick Behzadan, Amir H. Lomax, Tim Comput Urban Sci Original Paper Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural network (CNN) applied on video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods. This paper presents a framework designed to take videos as input and output traffic volume counts and intersection turning patterns. This framework comprises a CNN model and an object tracking algorithm to detect and track vehicles in the camera’s pixel view first. Homographic projection then maps vehicle spatial-temporal information (including unique ID, location, and timestamp) onto an orthogonal real-scale map, from which the traffic counts and turns are computed. Several video data are manually labeled and compared with the framework output. The following results show a robust traffic volume count accuracy up to 96.91%. Moreover, this work investigates the performance influencing factors including lighting condition (over a 24-h-period), pixel size, and camera angle. Based on the analysis, it is suggested to place cameras such that detection pixel size is above 2343 and the view angle is below 22°, for more accurate counts. Next, previous and current traffic reports after Texas A&M home football games are compared with the framework output. Results suggest that the proposed framework is able to reproduce traffic volume change trends for different traffic directions. Lastly, this work also contributes a new intersection turning pattern, i.e., counts for each ingress-egress edge pair, with its optimization technique which result in an accuracy between 43% and 72%. Springer Singapore 2022-01-06 2022 /pmc/articles/PMC8731210/ /pubmed/35013737 http://dx.doi.org/10.1007/s43762-021-00031-w Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Original Paper Pi, Yalong Duffield, Nick Behzadan, Amir H. Lomax, Tim Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title | Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title_full | Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title_fullStr | Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title_full_unstemmed | Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title_short | Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
title_sort | visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8731210/ https://www.ncbi.nlm.nih.gov/pubmed/35013737 http://dx.doi.org/10.1007/s43762-021-00031-w |
work_keys_str_mv | AT piyalong visualrecognitionforurbantrafficdataretrievalandanalysisinmajoreventsusingconvolutionalneuralnetworks AT duffieldnick visualrecognitionforurbantrafficdataretrievalandanalysisinmajoreventsusingconvolutionalneuralnetworks AT behzadanamirh visualrecognitionforurbantrafficdataretrievalandanalysisinmajoreventsusingconvolutionalneuralnetworks AT lomaxtim visualrecognitionforurbantrafficdataretrievalandanalysisinmajoreventsusingconvolutionalneuralnetworks |
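The description field above outlines the framework's pipeline: a CNN detector and an object tracker produce per-vehicle pixel trajectories, a homographic projection maps them onto an orthogonal real-scale map, and traffic counts plus ingress-egress turning pairs are computed from the projected tracks. The sketch below is only a rough Python/OpenCV illustration of the projection and turning-count steps, not the authors' implementation; the correspondence points, zone rectangles, and helper names (to_map, in_zone, count_turns) are hypothetical, and the detector/tracker that would feed it are not specified in this record.

```python
# Minimal sketch of the projection-and-counting step, assuming a detector/tracker
# has already produced per-track pixel trajectories. All coordinates, zone
# rectangles, and helper names are hypothetical placeholders.
import numpy as np
import cv2

# Pixel <-> map correspondences measured once per camera view (map units: metres).
pixel_pts = np.float32([[102, 540], [1180, 560], [700, 220], [310, 230]])
map_pts = np.float32([[0.0, 0.0], [30.0, 0.0], [26.0, 45.0], [4.0, 44.0]])

# Homography from the camera's pixel view to an orthogonal real-scale map.
H, _ = cv2.findHomography(pixel_pts, map_pts)


def to_map(points_px):
    """Project an Nx2 array of pixel coordinates onto the real-scale map."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)


def in_zone(point, rect):
    """Axis-aligned rectangle test; rect = (xmin, ymin, xmax, ymax) in map units."""
    x, y = point
    xmin, ymin, xmax, ymax = rect
    return xmin <= x <= xmax and ymin <= y <= ymax


def count_turns(tracks, ingress_zones, egress_zones):
    """Tally ingress-egress edge pairs from projected trajectories.

    tracks: {track_id: [(timestamp, x_map, y_map), ...]} sorted by time.
    ingress_zones / egress_zones: {edge_name: rect} in map coordinates.
    """
    counts = {}
    for trajectory in tracks.values():
        first = trajectory[0][1:]   # (x, y) where the track first appears
        last = trajectory[-1][1:]   # (x, y) where the track leaves the view
        src = next((n for n, z in ingress_zones.items() if in_zone(first, z)), None)
        dst = next((n for n, z in egress_zones.items() if in_zone(last, z)), None)
        if src is not None and dst is not None:
            counts[(src, dst)] = counts.get((src, dst), 0) + 1
    return counts
```

In practice the per-track trajectories would come from whichever CNN detector and tracking algorithm the framework pairs, and the ingress/egress zones would be drawn on the same real-scale map used to fit the homography; per-edge traffic volumes then follow by summing the turning counts over each ingress edge.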