Computing Complex Visual Features with Retinal Spike Times
Main Authors:
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2013
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3534662/
https://www.ncbi.nlm.nih.gov/pubmed/23301021
http://dx.doi.org/10.1371/journal.pone.0053063
Summary: Neurons in sensory systems can represent information not only by their firing rate, but also by the precise timing of individual spikes. For example, certain retinal ganglion cells, first identified in the salamander, encode the spatial structure of a new image by their first-spike latencies. Here we explore how this temporal code can be used by downstream neural circuits for computing complex features of the image that are not available from the signals of individual ganglion cells. To this end, we feed the experimentally observed spike trains from a population of retinal ganglion cells to an integrate-and-fire model of post-synaptic integration. The synaptic weights of this integration are tuned according to the recently introduced tempotron learning rule. We find that this model neuron can perform complex visual detection tasks in a single synaptic stage that would require multiple stages for neurons operating instead on neural spike counts. Furthermore, the model computes rapidly, using only a single spike per afferent, and can signal its decision in turn by just a single spike. Extending these analyses to large ensembles of simulated retinal signals, we show that the model can detect the orientation of a visual pattern independent of its phase, an operation thought to be one of the primitives in early visual processing. We analyze how these computations work and compare the performance of this model to other schemes for reading out spike-timing information. These results demonstrate that the retina formats spatial information into temporal spike sequences in a way that favors computation in the time domain. Moreover, complex image analysis can be achieved already by a simple integrate-and-fire model neuron, emphasizing the power and plausibility of rapid neural computing with spike times.
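The summary describes reading out retinal first-spike latencies with an integrate-and-fire model neuron whose synaptic weights are trained by the tempotron rule. As a rough illustration of that scheme, the Python sketch below implements a generic tempotron (double-exponential synaptic kernel, weight nudges at the time of the voltage maximum on misclassified trials) and trains it on synthetic first-spike latencies. The time constants, threshold, learning rate, and toy data are illustrative assumptions, not values or recordings from the paper.

```python
"""Minimal tempotron-style readout of first-spike latencies (illustrative sketch)."""
import numpy as np

rng = np.random.default_rng(0)

TAU_M, TAU_S = 15.0, 3.75   # membrane / synaptic time constants in ms (assumed)
T_WINDOW_MS = 100.0         # decision window (assumed)
THRESHOLD = 1.0             # firing threshold (assumed)
LRATE = 0.01                # learning rate (assumed)

# Normalize the postsynaptic-potential kernel so its peak equals 1.
t_peak = TAU_M * TAU_S / (TAU_M - TAU_S) * np.log(TAU_M / TAU_S)
V0 = 1.0 / (np.exp(-t_peak / TAU_M) - np.exp(-t_peak / TAU_S))

def kernel(dt):
    """Causal double-exponential PSP kernel evaluated at time lags dt (ms)."""
    dt = np.asarray(dt, dtype=float)
    k = V0 * (np.exp(-dt / TAU_M) - np.exp(-dt / TAU_S))
    return np.where(dt > 0, k, 0.0)

def potential(weights, latencies, t_grid):
    """Membrane potential V(t) from one spike (first-spike latency) per afferent."""
    psps = kernel(t_grid[:, None] - latencies[None, :])  # shape: (time, afferent)
    return psps @ weights

def train_tempotron(latencies, labels, n_afferents, epochs=200):
    """Tempotron rule: on a wrong binary decision, shift each weight by the
    afferent's PSP value at the time of the voltage maximum."""
    weights = rng.normal(0.0, 0.01, n_afferents)
    t_grid = np.arange(0.0, T_WINDOW_MS, 0.5)
    for _ in range(epochs):
        for lat, label in zip(latencies, labels):
            v = potential(weights, lat, t_grid)
            fired = v.max() >= THRESHOLD
            if fired == bool(label):
                continue                          # correct decision, no update
            t_vmax = t_grid[np.argmax(v)]
            grad = kernel(t_vmax - lat)           # per-afferent contribution at t_vmax
            weights += LRATE * grad if label else -LRATE * grad
    return weights

if __name__ == "__main__":
    # Toy stand-in for retinal first-spike latencies: class 1 spikes earlier
    # on half of the afferents (purely illustrative, not real retinal data).
    n_aff, n_trials = 20, 100
    labels = rng.integers(0, 2, n_trials)
    latencies = rng.uniform(20.0, 80.0, (n_trials, n_aff))
    latencies[labels == 1, : n_aff // 2] -= 15.0
    w = train_tempotron(latencies, labels, n_aff)
    t_grid = np.arange(0.0, T_WINDOW_MS, 0.5)
    preds = [potential(w, lat, t_grid).max() >= THRESHOLD for lat in latencies]
    print("training accuracy:", np.mean(np.array(preds) == labels))
```

In this sketch the decision is simply whether the membrane potential crosses threshold anywhere in the window, so the readout can report its classification with a single output spike, consistent with the single-spike decision emphasized in the summary; how well it matches the paper's results on recorded retinal spike trains is not something this toy example can show.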