
A Virtual Retina for Studying Population Coding

At every level of the visual system – from retina to cortex – information is encoded in the activity of large populations of cells. The populations are not uniform, but contain many different types of cells, each with its own sensitivities to visual stimuli. Understanding the roles of the cell types and how they work together to form collective representations has been a long-standing goal. This goal, though, has been difficult to advance, and, to a large extent, the reason is data limitation. Large numbers of stimulus/response relationships need to be explored, and obtaining enough data to examine even a fraction of them requires a large number of experiments and animals. Here we describe a tool for addressing this, specifically at the level of the retina. The tool is a data-driven model of retinal input/output relationships that is effective on a broad range of stimuli – essentially, a virtual retina. The results show that it is highly reliable: (1) the model cells carry the same amount of information as their real cell counterparts, (2) the quality of the information is the same – that is, the posterior stimulus distributions produced by the model cells closely match those of their real cell counterparts, and (3) the model cells are able to make very reliable predictions about the functions of the different retinal output cell types, as measured using Bayesian decoding (electrophysiology) and optomotor performance (behavior). In sum, we present a new tool for studying population coding and test it experimentally. It provides a way to rapidly probe the actions of different cell classes and develop testable predictions. The overall aim is to build constrained theories about population coding and keep the number of experiments and animals to a minimum.
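The abstract's reliability criteria hinge on comparing posterior stimulus distributions, p(s | r), obtained from model-cell and real-cell responses via Bayesian decoding. The following is a minimal sketch of that kind of comparison, not the authors' code: it assumes a discrete stimulus set, independent Poisson-spiking cells, and a flat stimulus prior, and the rate tables are hypothetical stand-ins for recorded and fitted data.

import numpy as np

rng = np.random.default_rng(0)

stimuli = np.arange(8)   # hypothetical discrete stimulus set
n_cells = 20

# Hypothetical firing-rate tables: rates[c, s] = mean spike count of cell c
# to stimulus s. In the study these would come from recorded retinal cells
# and from the fitted model cells, respectively.
real_rates = rng.uniform(1.0, 10.0, size=(n_cells, len(stimuli)))
model_rates = real_rates + rng.normal(0.0, 0.2, size=real_rates.shape)
model_rates = np.clip(model_rates, 0.1, None)

def posterior(counts, rates):
    """p(s | r) under independent Poisson likelihoods and a flat prior.

    log p(r | s) = sum_c [ r_c * log(rate_cs) - rate_cs ], dropping the
    log(r_c!) terms, which are constant in s and cancel on normalization.
    """
    log_like = counts @ np.log(rates) - rates.sum(axis=0)
    log_post = log_like - log_like.max()   # stabilize before exponentiating
    p = np.exp(log_post)
    return p / p.sum()

# Simulate one response of the "real" population to stimulus 3 and decode it
# with both rate tables; closely matching posteriors correspond to the
# abstract's criterion (2).
true_s = 3
counts = rng.poisson(real_rates[:, true_s])
p_real = posterior(counts, real_rates)
p_model = posterior(counts, model_rates)
print("MAP (real):", stimuli[p_real.argmax()], " MAP (model):", stimuli[p_model.argmax()])
print("posterior mismatch (L1):", np.abs(p_real - p_model).sum())

The L1 distance printed at the end is just one simple way to quantify how far the model-derived posterior sits from the real-cell posterior; the paper's actual information-theoretic and behavioral comparisons are described in the article itself.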

Bibliographic Details

Main Authors: Bomash, Illya; Roudi, Yasser; Nirenberg, Sheila
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 14 January 2013 (PLoS One, Research Article)
Collection: PubMed (record pubmed-3544815)
Subjects: Research Article
License: © 2013 Bomash et al. Open access under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3544815/
https://www.ncbi.nlm.nih.gov/pubmed/23341940
http://dx.doi.org/10.1371/journal.pone.0053363