3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications
Unmanned vehicles need to build a comprehensive perception of the surrounding environment while driving, so perception of vehicle information is significant. In the field of automotive perception, stereovision-based car detection plays a vital role: stereovision can calculate the...
Main Authors: | Zhang, Qiang; Hu, Xiaojian; Su, Ziyi; Song, Zhihong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science 2020 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7470372/ https://www.ncbi.nlm.nih.gov/pubmed/32881926 http://dx.doi.org/10.1371/journal.pone.0236947 |
_version_ | 1783578573778452480 |
---|---|
author | Zhang, Qiang Hu, Xiaojian Su, Ziyi Song, Zhihong |
author_facet | Zhang, Qiang Hu, Xiaojian Su, Ziyi Song, Zhihong |
author_sort | Zhang, Qiang |
collection | PubMed |
description | Unmanned vehicles need to build a comprehensive perception of the surrounding environment while driving, so perception of vehicle information is significant. In the field of automotive perception, stereovision-based car detection plays a vital role: stereovision can calculate the length, width, and height of a car, making the detection more specific. However, with existing technology it is impossible to obtain accurate detection in a complex environment by relying on a single sensor, so it is particularly important to study complex sensing technology based on multi-sensor fusion. Recently, with the development of deep learning in the field of vision, a mobile sensor-fusion method based on deep learning, the Mobile Deep Sensor Fusion Model (MDSFM), is proposed and applied in this paper. The contributions of this article are as follows. It performs a data-processing step that projects 3D data onto 2D data, forming a dataset suitable for the model and thereby training on the data more efficiently. In the LiDAR modules, it uses a revised SqueezeNet structure to lighten the model and reduce parameters. In the camera modules, it uses an improved design of the detection module in R-CNN with a Mobile Spatial Attention Module (MSAM). In the fusion part, it uses a dual-view deep fusion structure. It then selects images from the KITTI dataset for validation to test the model. Compared with other recognized methods, our model shows fairly good performance. Finally, it implements a ROS program on an experimental car, on which the model runs well. The results show that MDSFM significantly improves the performance of detecting easy cars. It increases the quality of the detected data and improves the generalization ability of the car-detection model. It improves contextual relevance and preserves background information. It remains stable in driverless environments. It is applied in a realistic scenario, proving that the model has good practical value. |
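The first step described in the abstract, projecting 3D LiDAR data onto the 2D image plane, can be sketched with a standard pinhole model and KITTI-style calibration matrices. The matrix values and the function name below are illustrative placeholders, not taken from the paper:

```python
import numpy as np

# Hypothetical KITTI-style calibration; the numbers are placeholders,
# not values from the paper or any real calibration file.
P2 = np.array([[721.5, 0.0, 609.6, 44.9],        # camera projection matrix (3x4)
               [0.0, 721.5, 172.9, 0.2],
               [0.0, 0.0, 1.0, 0.003]])
Tr_velo_to_cam = np.array([[0.0, -1.0, 0.0, 0.0],    # LiDAR -> camera extrinsics (3x4)
                           [0.0, 0.0, -1.0, -0.08],
                           [1.0, 0.0, 0.0, -0.27]])

def project_lidar_to_image(points_xyz):
    """Project an Nx3 array of LiDAR points to Nx2 pixel coordinates."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])   # Nx4 homogeneous points
    cam = Tr_velo_to_cam @ homo.T                     # 3xN, camera frame
    cam_h = np.vstack([cam, np.ones((1, n))])         # back to homogeneous, 4xN
    img = P2 @ cam_h                                  # 3xN on the image plane
    return (img[:2] / img[2]).T                       # perspective divide -> Nx2

pts = np.array([[10.0, 2.0, -1.0], [20.0, -3.0, 0.5]])  # points in front of the car
uv = project_lidar_to_image(pts)
```

In the KITTI convention a LiDAR point is first transformed into the camera frame by the extrinsics and then projected by the camera matrix; points with non-positive depth fall behind the camera and would normally be filtered out before the perspective divide.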
format | Online Article Text |
id | pubmed-7470372 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-74703722020-09-11 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications Zhang, Qiang Hu, Xiaojian Su, Ziyi Song, Zhihong PLoS One Research Article Unmanned vehicles need to build a comprehensive perception of the surrounding environment while driving, so perception of vehicle information is significant. In the field of automotive perception, stereovision-based car detection plays a vital role: stereovision can calculate the length, width, and height of a car, making the detection more specific. However, with existing technology it is impossible to obtain accurate detection in a complex environment by relying on a single sensor, so it is particularly important to study complex sensing technology based on multi-sensor fusion. Recently, with the development of deep learning in the field of vision, a mobile sensor-fusion method based on deep learning, the Mobile Deep Sensor Fusion Model (MDSFM), is proposed and applied in this paper. The contributions of this article are as follows. It performs a data-processing step that projects 3D data onto 2D data, forming a dataset suitable for the model and thereby training on the data more efficiently. In the LiDAR modules, it uses a revised SqueezeNet structure to lighten the model and reduce parameters. In the camera modules, it uses an improved design of the detection module in R-CNN with a Mobile Spatial Attention Module (MSAM). In the fusion part, it uses a dual-view deep fusion structure. It then selects images from the KITTI dataset for validation to test the model. Compared with other recognized methods, our model shows fairly good performance. Finally, it implements a ROS program on an experimental car, on which the model runs well. The results show that MDSFM significantly improves the performance of detecting easy cars. It increases the quality of the detected data and improves the generalization ability of the car-detection model. It improves contextual relevance and preserves background information. It remains stable in driverless environments. It is applied in a realistic scenario, proving that the model has good practical value. Public Library of Science 2020-09-03 /pmc/articles/PMC7470372/ /pubmed/32881926 http://dx.doi.org/10.1371/journal.pone.0236947 Text en © 2020 Zhang et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
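The record does not spell out the internals of the paper's Mobile Spatial Attention Module (MSAM). A generic CBAM-style spatial-attention block, which re-weights a feature map by a sigmoid mask computed from channel-wise average and max pooling, can be sketched as a rough stand-in; every name, shape, and weight below is an illustrative assumption, not the paper's design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, conv_w, conv_b):
    """CBAM-style spatial attention over a CxHxW feature map.

    conv_w is a (2, k, k) kernel applied to the 2-channel pooled
    descriptor; conv_b is its scalar bias (both illustrative)."""
    avg = feat.mean(axis=0)             # HxW channel-average pooling
    mx = feat.max(axis=0)               # HxW channel-max pooling
    pooled = np.stack([avg, mx])        # 2xHxW spatial descriptor
    k = conv_w.shape[-1]
    pad = k // 2
    padded = np.pad(pooled, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    attn = np.zeros((H, W))
    for i in range(H):                  # naive same-padding convolution
        for j in range(W):
            attn[i, j] = np.sum(conv_w * padded[:, i:i + k, j:j + k]) + conv_b
    mask = sigmoid(attn)                # attention weights in (0, 1)
    return feat * mask                  # re-weight every channel spatially

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 6, 6))   # toy CxHxW feature map
w = rng.standard_normal((2, 3, 3)) * 0.1
out = spatial_attention(feat, w, 0.0)
```

Because the sigmoid mask lies in (0, 1), the block can only attenuate features, letting the network emphasize spatial locations (e.g. car regions) without changing the feature map's shape.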
spellingShingle | Research Article Zhang, Qiang Hu, Xiaojian Su, Ziyi Song, Zhihong 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title | 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title_full | 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title_fullStr | 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title_full_unstemmed | 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title_short | 3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications |
title_sort | 3d car-detection based on a mobile deep sensor fusion model and real-scene applications |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7470372/ https://www.ncbi.nlm.nih.gov/pubmed/32881926 http://dx.doi.org/10.1371/journal.pone.0236947 |
work_keys_str_mv | AT zhangqiang 3dcardetectionbasedonamobiledeepsensorfusionmodelandrealsceneapplications AT huxiaojian 3dcardetectionbasedonamobiledeepsensorfusionmodelandrealsceneapplications AT suziyi 3dcardetectionbasedonamobiledeepsensorfusionmodelandrealsceneapplications AT songzhihong 3dcardetectionbasedonamobiledeepsensorfusionmodelandrealsceneapplications |