Real-time classification and sensor fusion with a spiking deep belief network
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback...
Main Authors: | O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3792559/ https://www.ncbi.nlm.nih.gov/pubmed/24115919 http://dx.doi.org/10.3389/fnins.2013.00178 |
_version_ | 1782286863934947328 |
---|---|
author | O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael |
author_facet | O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael |
author_sort | O'Connor, Peter |
collection | PubMed |
description | Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. |
format | Online Article Text |
id | pubmed-3792559 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-37925592013-10-10 Real-time classification and sensor fusion with a spiking deep belief network O'Connor, Peter Neil, Daniel Liu, Shih-Chii Delbruck, Tobi Pfeiffer, Michael Front Neurosci Neuroscience Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. Frontiers Media S.A. 2013-10-08 /pmc/articles/PMC3792559/ /pubmed/24115919 http://dx.doi.org/10.3389/fnins.2013.00178 Text en Copyright © 2013 O'Connor, Neil, Liu, Delbruck and Pfeiffer. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience O'Connor, Peter Neil, Daniel Liu, Shih-Chii Delbruck, Tobi Pfeiffer, Michael Real-time classification and sensor fusion with a spiking deep belief network |
title | Real-time classification and sensor fusion with a spiking deep belief network |
title_full | Real-time classification and sensor fusion with a spiking deep belief network |
title_fullStr | Real-time classification and sensor fusion with a spiking deep belief network |
title_full_unstemmed | Real-time classification and sensor fusion with a spiking deep belief network |
title_short | Real-time classification and sensor fusion with a spiking deep belief network |
title_sort | real-time classification and sensor fusion with a spiking deep belief network |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3792559/ https://www.ncbi.nlm.nih.gov/pubmed/24115919 http://dx.doi.org/10.3389/fnins.2013.00178 |
work_keys_str_mv | AT oconnorpeter realtimeclassificationandsensorfusionwithaspikingdeepbeliefnetwork AT neildaniel realtimeclassificationandsensorfusionwithaspikingdeepbeliefnetwork AT liushihchii realtimeclassificationandsensorfusionwithaspikingdeepbeliefnetwork AT delbrucktobi realtimeclassificationandsensorfusionwithaspikingdeepbeliefnetwork AT pfeiffermichael realtimeclassificationandsensorfusionwithaspikingdeepbeliefnetwork |
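The description in this record states that the network is built by mapping an offline-trained DBN onto event-driven spiking neurons using the Siegert approximation for integrate-and-fire units. As a rough illustration of that transfer function (not the authors' implementation; the parameter names, default values, and Poisson-input moment estimates below are assumptions), a minimal Python sketch of the Siegert/Ricciardi firing-rate formula is:

```python
# Minimal sketch of the Siegert firing-rate approximation for a leaky
# integrate-and-fire (LIF) neuron, the transfer function the abstract
# names for mapping DBN units onto spiking neurons. All parameter names,
# defaults, and the diffusion-approximation moments below are
# illustrative assumptions, not values taken from the paper.
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def siegert_rate(weights, in_rates, v_reset=0.0, v_th=1.0,
                 tau_m=0.02, t_ref=0.002):
    """Approximate output rate (Hz) of an LIF neuron driven by Poisson
    spike trains with rates `in_rates` (Hz) through synaptic `weights`."""
    weights = np.asarray(weights, dtype=float)
    in_rates = np.asarray(in_rates, dtype=float)
    # Mean and standard deviation of the free membrane potential under a
    # standard diffusion approximation for Poisson inputs (assumed convention).
    mu = tau_m * np.dot(weights, in_rates)
    sigma = np.sqrt(0.5 * tau_m * np.dot(weights ** 2, in_rates))
    if sigma <= 0.0:
        # Noise-free limit: the neuron fires only if the mean drive alone
        # can reach threshold.
        if mu <= v_th:
            return 0.0
        return 1.0 / (t_ref + tau_m * np.log(mu / (mu - v_th)))
    # Ricciardi/Siegert mean first-passage time from reset to threshold,
    # inverted (with the refractory period) to give the mean firing rate.
    integrand = lambda u: np.exp(u ** 2) * (1.0 + erf(u))
    lo, hi = (v_reset - mu) / sigma, (v_th - mu) / sigma
    integral, _ = quad(integrand, lo, hi)
    return 1.0 / (t_ref + tau_m * np.sqrt(np.pi) * integral)

# Example: a unit receiving 100 Poisson inputs at 40 Hz through small
# positive weights; the returned value is its predicted firing rate in Hz.
print(siegert_rate(weights=np.full(100, 0.015), in_rates=np.full(100, 40.0)))
```

In such a mapping, the smooth rate predicted by this formula plays the role of the DBN unit's analog activation, so weights trained offline can be reused directly by spiking neurons whose output rates approximate the original activations.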