
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
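The abstract notes that the SPAD LIDAR uses a time-of-flight method. As a minimal illustrative sketch (not code from the paper), direct time-of-flight converts the measured round-trip time of a returned laser pulse into range:

```python
# Illustrative sketch of direct time-of-flight ranging, the principle
# behind SPAD LIDARs: the sensor timestamps the return of a laser
# pulse, and the range is half the round-trip distance at light speed.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip pulse time to range in metres."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~66.7 ns corresponds to a target ~10 m away.
print(round(tof_range_m(66.7e-9), 2))
```

In practice a SPAD array accumulates photon-arrival histograms per pixel and picks the peak bin as the round-trip time; the one-line conversion above is the final step of that pipeline.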

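The fusion idea described in the abstract can be sketched as follows. Because the range, monocular and peak-intensity images come out of the same chip in one coordinate system, they can be stacked channel-wise with no extrinsic calibration. The tiny linear "heads" below are hypothetical stand-ins for the paper's DCNN; they only illustrate the data flow and the two-output structure (position regression plus target-presence classification):

```python
import numpy as np

# Hypothetical sketch: fuse the three co-registered SPAD LIDAR outputs
# channel-wise and feed them to a network with two heads.  The linear
# layers here are placeholders for the paper's DCNN architecture.

H, W = 8, 8
rng = np.random.default_rng(0)
range_img = rng.random((H, W))  # range image
mono_img = rng.random((H, W))   # monocular (ambient light) image
peak_img = rng.random((H, W))   # peak intensity image

# Channel-wise fusion: a (3, H, W) tensor, no external calibration step.
fused = np.stack([range_img, mono_img, peak_img])
x = fused.reshape(-1)  # flatten for the toy linear heads

w_reg = rng.standard_normal((2, x.size)) * 0.01  # (x, y) position head
w_cls = rng.standard_normal(x.size) * 0.01       # target-presence head

position = w_reg @ x                              # regression output
p_target = 1.0 / (1.0 + np.exp(-(w_cls @ x)))     # sigmoid in (0, 1)

print(fused.shape, position.shape, 0.0 < p_target < 1.0)
```

Training one network on both objectives lets the shared features serve localization and target detection at once, which is the multi-task structure the abstract describes.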

Bibliographic Details
Main Authors: Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5795864/
https://www.ncbi.nlm.nih.gov/pubmed/29320434
http://dx.doi.org/10.3390/s18010177
collection PubMed
format Online
Article
Text
id pubmed-5795864
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
journal Sensors (Basel)
published_online 2018-01-10
license © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
topic Article
work_keys_str_mv AT itoseigo smallimagingdepthlidaranddcnnbasedlocalizationforautomatedguidedvehicle
AT hiratsukashigeyoshi smallimagingdepthlidaranddcnnbasedlocalizationforautomatedguidedvehicle
AT ohtamitsuhiko smallimagingdepthlidaranddcnnbasedlocalizationforautomatedguidedvehicle
AT matsubarahiroyuki smallimagingdepthlidaranddcnnbasedlocalizationforautomatedguidedvehicle
AT ogawamasaru smallimagingdepthlidaranddcnnbasedlocalizationforautomatedguidedvehicle