Deep Ego-Motion Classifiers for Compound Eye Cameras

Bibliographic Details
Main Authors: Yoo, Hwiyeon; Cha, Geonho; Oh, Songhwai
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6928859/
https://www.ncbi.nlm.nih.gov/pubmed/31795509
http://dx.doi.org/10.3390/s19235275
Description: Compound eyes, also known as insect eyes, have a unique structure: a hemispheric surface on which many single eyes are arranged regularly. Thanks to this form, compound images offer several advantages, such as a large field of view (FOV) with low aberrations. We can exploit these benefits in high-level vision applications, such as object recognition or semantic segmentation for a moving robot, by emulating the compound images that describe the scenes captured by compound eye cameras. In this paper, we propose, to the best of our knowledge, the first convolutional neural network (CNN)-based ego-motion classification algorithm designed for the compound eye structure. To achieve this, we introduce a voting-based approach that fully utilizes one of the unique features of compound images, namely that a compound image consists of many single eye images. The proposed method classifies a number of local motions with a CNN, and these local classifications, which represent the motions of the individual single eye images, are aggregated into a final classification by a voting procedure. For the experiments, we collected a new dataset for compound eye camera ego-motion classification containing scenes from the inside and outside of a building. Each sample in the dataset consists of two consecutive emulated compound images and the corresponding ego-motion class. The experimental results show that the proposed method achieves a classification accuracy of 85.0%, which is superior to the baselines on the proposed dataset. Moreover, the proposed model is lightweight compared to conventional CNN-based image recognition models such as AlexNet, ResNet50, and MobileNetV2.
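The voting step described in the abstract (per-eye local classifications aggregated into one global ego-motion label) can be sketched as a simple majority vote. This is an illustrative sketch only, not the authors' released code; the function name `vote_ego_motion` and the shape of `local_logits` are assumptions for the example.

```python
from collections import Counter

import numpy as np

def vote_ego_motion(local_logits: np.ndarray) -> int:
    """Aggregate per-eye CNN outputs into a single ego-motion class.

    local_logits: array of shape (num_eyes, num_classes) holding the
    CNN's class scores for each single-eye image pair.
    Returns the index of the most frequently predicted class.
    """
    # One local ego-motion label per single-eye image
    local_classes = np.argmax(local_logits, axis=1)
    # Majority vote over all single eyes
    counts = Counter(local_classes.tolist())
    return counts.most_common(1)[0][0]

# Example: 5 single eyes, 3 hypothetical ego-motion classes
logits = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8, 0.1],
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
])
print(vote_ego_motion(logits))  # class 0 wins with 3 of 5 votes
```

In practice the paper's aggregation operates over all single-eye images of the compound camera, so a single misclassified eye has little effect on the final label.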
Published online: 2019-11-29. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).