Inferring neural circuit structure from datasets of heterogeneous tuning curves
Main Authors: | Arakaki, Takafumi; Barello, G.; Ahmadian, Yashar |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2019 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6493775/ https://www.ncbi.nlm.nih.gov/pubmed/31002660 http://dx.doi.org/10.1371/journal.pcbi.1006816 |
author | Arakaki, Takafumi Barello, G. Ahmadian, Yashar |
collection | PubMed |
description | Tuning curves characterizing the response selectivities of biological neurons can exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or partially random connectivity also give rise to diverse tuning curves. Empirical tuning curve distributions can thus be utilized to make model-based inferences about the statistics of single-cell parameters and network connectivity. However, a general framework for such an inference or fitting procedure is lacking. We address this problem by proposing to view mechanistic network models as implicit generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable. Recent advances in machine learning provide ways for fitting implicit generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GANs) provide one such framework which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic circuit models in theoretical neuroscience to datasets of tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, such as the biophysical parameters of, or synaptic connections between, particular recorded cells. Instead, it directly learns generalizable model parameters characterizing the network’s statistical structure such as the statistics of strength and spatial range of connections between different cell types. Another strength of this approach is that it fits the joint high-dimensional distribution of tuning curves, instead of matching a few summary statistics picked a priori by the user, resulting in a more accurate inference of circuit properties. More generally, this framework opens the door to direct model-based inference of circuit structure from data beyond single-cell tuning curves, such as simultaneous population recordings. |
format | Online Article Text |
id | pubmed-6493775 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-6493775 2019-05-17 Inferring neural circuit structure from datasets of heterogeneous tuning curves. Arakaki, Takafumi; Barello, G.; Ahmadian, Yashar. PLoS Comput Biol, Research Article. Public Library of Science 2019-04-19 /pmc/articles/PMC6493775/ /pubmed/31002660 http://dx.doi.org/10.1371/journal.pcbi.1006816 Text en © 2019 Arakaki et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
title | Inferring neural circuit structure from datasets of heterogeneous tuning curves |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6493775/ https://www.ncbi.nlm.nih.gov/pubmed/31002660 http://dx.doi.org/10.1371/journal.pcbi.1006816 |
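The description field above outlines the paper's approach: treat a mechanistic circuit model as an implicit generative model of tuning curves and fit it with a GAN, so that the trainable quantities are connectivity statistics rather than per-cell latent variables. Below is a minimal, hypothetical sketch of that idea, not the authors' code: it assumes PyTorch, and the toy linear-rate "circuit", the scalar weight statistics (`w_mean`, `w_std_raw`), the network sizes, and all function names are illustrative placeholders.

```python
import torch
import torch.nn as nn

n_neurons, n_stim = 50, 12                        # toy network size and stimulus count
stimuli = torch.randn(n_stim, n_neurons)          # fixed feedforward drive per stimulus

class CircuitGenerator(nn.Module):
    """Implicit generative model: sample a random circuit, return one cell's tuning curve."""
    def __init__(self):
        super().__init__()
        # Trainable connectivity *statistics* (shared across cells), not per-cell weights.
        self.w_mean = nn.Parameter(torch.zeros(1))
        self.w_std_raw = nn.Parameter(torch.full((1,), -2.0))   # softplus(-2) ~ 0.13

    def forward(self, batch):
        # Reparameterized draw of a recurrent weight matrix for each sample.
        eps = torch.randn(batch, n_neurons, n_neurons)
        std = torch.nn.functional.softplus(self.w_std_raw)
        W = self.w_mean + std * eps / n_neurons ** 0.5
        # Steady-state linear rates r = (I - W)^{-1} h, a crude stand-in for simulating
        # the circuit dynamics to steady state.
        I = torch.eye(n_neurons)
        h = stimuli.T.expand(batch, n_neurons, n_stim)
        rates = torch.linalg.solve(I - W, h)
        return rates[:, 0, :]                     # one "recorded" cell: (batch, n_stim)

gen = CircuitGenerator()
disc = nn.Sequential(nn.Linear(n_stim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(real_curves):
    """One adversarial update. real_curves: (batch, n_stim) measured tuning curves."""
    batch = real_curves.shape[0]
    # Discriminator: separate measured from model-generated tuning curves.
    opt_d.zero_grad()
    fake = gen(batch)
    d_loss = (bce(disc(real_curves), torch.ones(batch, 1))
              + bce(disc(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator: nudge the connectivity statistics so its curves fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(disc(gen(batch)), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In a run of this sketch, one would call `train_step` on minibatches of measured tuning curves and, after training, read off `w_mean` and `softplus(w_std_raw)` as the inferred connectivity statistics; no per-cell weights or latent variables are ever estimated, which is the point the description emphasizes.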