
Efficient gradient computation for dynamical models

Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters.


Bibliographic Details
Main Authors: Sengupta, B., Friston, K.J., Penny, W.D.
Format: Online Article Text
Language: English
Published: Academic Press, 2014
Subjects: Technical Note
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4120812/
https://www.ncbi.nlm.nih.gov/pubmed/24769182
http://dx.doi.org/10.1016/j.neuroimage.2014.04.040
author Sengupta, B.
Friston, K.J.
Penny, W.D.
collection PubMed
description Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters.
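The three schemes compared in the abstract can be sketched on a toy problem. The following snippet is illustrative only, not the authors' code: the scalar model dx/dt = -theta*x, the cost J = 0.5*(x(T) - y)^2, and the Euler discretisation are all assumptions chosen for brevity. It computes dJ/dtheta by (i) central finite differences, (ii) a forward sensitivity equation integrated alongside the state, and (iii) a single backward adjoint pass, and the three estimates agree.

```python
# Toy comparison of three gradient schemes for a dynamical model.
# Model, cost, and step sizes are illustrative assumptions.

def simulate(theta, x0=1.0, T=1.0, n=10000):
    """Forward-Euler trajectory of dx/dt = -theta*x on [0, T]."""
    dt = T / n
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] * (1.0 - theta * dt))
    return xs, dt

def cost(theta, y=0.2):
    """J = 0.5 * (x(T) - y)^2 for the simulated trajectory."""
    xs, _ = simulate(theta)
    return 0.5 * (xs[-1] - y) ** 2

def grad_finite_difference(theta, h=1e-6):
    # (i) central finite differences: two extra model runs per parameter
    return (cost(theta + h) - cost(theta - h)) / (2.0 * h)

def grad_forward_sensitivity(theta, y=0.2):
    # (ii) integrate the sensitivity s = dx/dtheta alongside x:
    #      ds/dt = -theta*s - x, s(0) = 0 (one extra ODE per parameter)
    xs, dt = simulate(theta)
    s = 0.0
    for x in xs[:-1]:
        s += dt * (-theta * s - x)
    return (xs[-1] - y) * s

def grad_adjoint(theta, y=0.2):
    # (iii) one backward pass regardless of the number of parameters:
    #       dlam/dt = theta*lam with lam(T) = x(T) - y, then
    #       dJ/dtheta = integral of lam * (df/dtheta) = integral of -lam*x
    xs, dt = simulate(theta)
    lam = xs[-1] - y
    g = 0.0
    for x in reversed(xs[:-1]):
        g += dt * lam * (-x)
        lam *= (1.0 - theta * dt)  # step the adjoint backwards in time
    return g

print(grad_finite_difference(0.5))
print(grad_forward_sensitivity(0.5))
print(grad_adjoint(0.5))
```

For a single parameter all three methods cost roughly the same. The abstract's point shows up with many parameters: the adjoint needs one backward integration in total, whereas finite differences and forward sensitivities each add integrations per parameter.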
format Online
Article
Text
id pubmed-4120812
institution National Center for Biotechnology Information
language English
publishDate 2014
publisher Academic Press
record_format MEDLINE/PubMed
journal Neuroimage
section Technical Note
publishDate 2014-09
license © 2014 The Authors. This work is licensed under a Creative Commons Attribution 3.0 Unported License (https://creativecommons.org/licenses/by/3.0/).
title Efficient gradient computation for dynamical models
topic Technical Note
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4120812/
https://www.ncbi.nlm.nih.gov/pubmed/24769182
http://dx.doi.org/10.1016/j.neuroimage.2014.04.040