A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision
Main Authors: | Haessig, Germain; Berthelon, Xavier; Ieng, Sio-Hoi; Benosman, Ryad |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2019 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6403400/ https://www.ncbi.nlm.nih.gov/pubmed/30842458 http://dx.doi.org/10.1038/s41598-019-40064-0 |
_version_ | 1783400595498991616 |
---|---|
author | Haessig, Germain Berthelon, Xavier Ieng, Sio-Hoi Benosman, Ryad |
author_facet | Haessig, Germain Berthelon, Xavier Ieng, Sio-Hoi Benosman, Ryad |
author_sort | Haessig, Germain |
collection | PubMed |
description | Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels’ absolute luminance values. Recent advances in neuromorphic engineering offer an alternative, using event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact and computationally inexpensive setup to estimate depth in a 3D scene in real time at high rates, one that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. Exploiting the high temporal resolution of the event-based silicon retina, we are able to extract depth at 100 Hz for a power budget lower than 200 mW (10 mW for the camera, 90 mW for the liquid lens and ~100 mW for the computation). We validate the model with experimental results, highlighting features that are consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype of a neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological depth-from-defocus experiments reported in the literature. |
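The abstract's core idea, a liquid lens sweeping focus while an event camera reports per-pixel activity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear optical-power sweep, a fixed lens-to-sensor distance, and that each pixel fires its densest burst of events at the instant it passes through best focus; all parameter values (`V_SENSOR`, the sweep range, the 10 ms period) are hypothetical, and the depth recovery is just the thin-lens equation inverted at the in-focus instant.

```python
import numpy as np

# Hypothetical optics for illustration only (not the paper's parameters).
V_SENSOR = 0.02            # lens-to-sensor distance in metres
T_SWEEP = 0.01             # one sweep period: 10 ms -> 100 Hz depth maps
P_MIN, P_MAX = 51.0, 60.0  # optical power sweep range in dioptres

def power_at(t):
    """Optical power 1/f of the liquid lens at time t within one linear sweep."""
    return P_MIN + (P_MAX - P_MIN) * (t / T_SWEEP)

def depth_from_focus_time(t_focus):
    """Invert the thin-lens equation 1/f = 1/u + 1/v at the in-focus instant."""
    inv_u = power_at(t_focus) - 1.0 / V_SENSOR  # 1/u = 1/f - 1/v
    return 1.0 / inv_u

def peak_event_time(timestamps, n_bins=100):
    """Time of maximal event rate for one pixel (histogram-peak estimate)."""
    counts, edges = np.histogram(timestamps, bins=n_bins, range=(0.0, T_SWEEP))
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1])  # centre of the busiest bin

# Synthetic pixel: an object at 0.5 m is in focus when 1/f = 1/0.5 + 1/v,
# i.e. at the sweep instant where the lens reaches that optical power.
p_focus = 1.0 / 0.5 + 1.0 / V_SENSOR
t_true = (p_focus - P_MIN) / (P_MAX - P_MIN) * T_SWEEP
rng = np.random.default_rng(0)
events = rng.normal(t_true, 2e-4, size=500)  # event burst around best focus

depth = depth_from_focus_time(peak_event_time(events))
print(f"estimated depth: {depth:.3f} m")  # close to 0.5 m
```

In a spiking implementation the histogram peak would instead emerge from neurons tuned to sweep phases, but the mapping from burst time to depth is the same.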
format | Online Article Text |
id | pubmed-6403400 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-64034002019-03-11 A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision Haessig, Germain Berthelon, Xavier Ieng, Sio-Hoi Benosman, Ryad Sci Rep Article
Nature Publishing Group UK 2019-03-06 /pmc/articles/PMC6403400/ /pubmed/30842458 http://dx.doi.org/10.1038/s41598-019-40064-0 Text en © The Author(s) 2019 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Haessig, Germain Berthelon, Xavier Ieng, Sio-Hoi Benosman, Ryad A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title | A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title_full | A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title_fullStr | A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title_full_unstemmed | A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title_short | A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision |
title_sort | spiking neural network model of depth from defocus for event-based neuromorphic vision |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6403400/ https://www.ncbi.nlm.nih.gov/pubmed/30842458 http://dx.doi.org/10.1038/s41598-019-40064-0 |