
Spatiotemporal features for asynchronous event-based data

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
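The core mechanism the abstract describes can be sketched in a few lines. The Python snippet below is a rough illustration, not the authors' implementation: the network sizes, leak rate, learning rate, and toy input stream are all assumptions. It shows a bank of echo-state-style recurrent reservoirs, each predicting the incoming activity, with a winner-take-all rule that lets only the best predictor adapt its readout, so the networks specialize on distinct dynamic features in an unsupervised manner.

# Minimal sketch of the mechanism described in the abstract, NOT the authors'
# implementation: a bank of echo-state-style recurrent reservoirs competes to
# predict its input stream, and a winner-take-all rule lets only the best
# predictor adapt, so each network specializes on a distinct dynamic feature.
# All sizes, rates, and the toy input below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_NETS = 16, 100, 4    # input dim, reservoir size, competing networks
LEAK, LR = 0.3, 1e-3                # leak rate and readout learning rate (assumed)

def make_reservoir():
    # Random recurrent weights rescaled to spectral radius 0.9 (echo state property).
    W = rng.normal(size=(N_RES, N_RES))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    return {
        "W": W,
        "W_in": rng.normal(size=(N_RES, N_IN)),
        "W_out": np.zeros((N_IN, N_RES)),   # linear readout predicting the input
        "x": np.zeros(N_RES),               # reservoir state
    }

nets = [make_reservoir() for _ in range(N_NETS)]

for t in range(1000):
    # Toy stand-in for event-based input: a sparse activity vector per time step,
    # as if events (pixel address + timestamp) were binned into short windows.
    u = np.zeros(N_IN)
    u[(t // 10) % N_IN] = 1.0           # a slowly drifting "edge"

    # Each network predicts the current input from its own recurrent history.
    preds = [net["W_out"] @ net["x"] for net in nets]
    errors = [float(np.sum((p - u) ** 2)) for p in preds]

    # Winner-take-all: only the most accurate predictor updates its readout
    # (online least-mean-squares), driving unsupervised specialization.
    k = int(np.argmin(errors))
    nets[k]["W_out"] += LR * np.outer(u - preds[k], nets[k]["x"])

    # All reservoirs then observe the input and update their internal state.
    for net in nets:
        pre = net["W"] @ net["x"] + net["W_in"] @ u
        net["x"] = (1 - LEAK) * net["x"] + LEAK * np.tanh(pre)

In the paper the input consists of events from an asynchronous vision sensor rather than the binned toy vector used here, and a winning network's prediction can be rolled forward in time, which is how the architecture predicts feature trajectories.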

Bibliographic Details
Main Authors: Lagorce, Xavier; Ieng, Sio-Hoi; Clady, Xavier; Pfeiffer, Michael; Benosman, Ryad B.
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2015-02-24
Journal: Front Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4338664/
https://www.ncbi.nlm.nih.gov/pubmed/25759637
http://dx.doi.org/10.3389/fnins.2015.00046
License: Copyright © 2015 Lagorce, Ieng, Clady, Pfeiffer and Benosman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY; http://creativecommons.org/licenses/by/4.0/). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.