Code Generation in Computational Neuroscience: A Review of Tools and Techniques
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators.
Main Authors: | Blundell, Inga, Brette, Romain, Cleland, Thomas A., Close, Thomas G., Coca, Daniel, Davison, Andrew P., Diaz-Pier, Sandra, Fernandez Musoles, Carlos, Gleeson, Padraig, Goodman, Dan F. M., Hines, Michael, Hopkins, Michael W., Kumbhar, Pramod, Lester, David R., Marin, Bóris, Morrison, Abigail, Müller, Eric, Nowotny, Thomas, Peyser, Alexander, Plotnikov, Dimitri, Richmond, Paul, Rowley, Andrew, Rumpe, Bernhard, Stimberg, Marcel, Stokes, Alan B., Tomkins, Adam, Trensch, Guido, Woodman, Marmaduke, Eppler, Jochen Martin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2018 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6230720/ https://www.ncbi.nlm.nih.gov/pubmed/30455637 http://dx.doi.org/10.3389/fninf.2018.00068 |
_version_ | 1783370128789864448 |
---|---|
author | Blundell, Inga Brette, Romain Cleland, Thomas A. Close, Thomas G. Coca, Daniel Davison, Andrew P. Diaz-Pier, Sandra Fernandez Musoles, Carlos Gleeson, Padraig Goodman, Dan F. M. Hines, Michael Hopkins, Michael W. Kumbhar, Pramod Lester, David R. Marin, Bóris Morrison, Abigail Müller, Eric Nowotny, Thomas Peyser, Alexander Plotnikov, Dimitri Richmond, Paul Rowley, Andrew Rumpe, Bernhard Stimberg, Marcel Stokes, Alan B. Tomkins, Adam Trensch, Guido Woodman, Marmaduke Eppler, Jochen Martin |
author_facet | Blundell, Inga Brette, Romain Cleland, Thomas A. Close, Thomas G. Coca, Daniel Davison, Andrew P. Diaz-Pier, Sandra Fernandez Musoles, Carlos Gleeson, Padraig Goodman, Dan F. M. Hines, Michael Hopkins, Michael W. Kumbhar, Pramod Lester, David R. Marin, Bóris Morrison, Abigail Müller, Eric Nowotny, Thomas Peyser, Alexander Plotnikov, Dimitri Richmond, Paul Rowley, Andrew Rumpe, Bernhard Stimberg, Marcel Stokes, Alan B. Tomkins, Adam Trensch, Guido Woodman, Marmaduke Eppler, Jochen Martin |
author_sort | Blundell, Inga |
collection | PubMed |
description | Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum of detail, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to restrict users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope, and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them. |
format | Online Article Text |
id | pubmed-6230720 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6230720 2018-11-19 Code Generation in Computational Neuroscience: A Review of Tools and Techniques Blundell, Inga Brette, Romain Cleland, Thomas A. Close, Thomas G. Coca, Daniel Davison, Andrew P. Diaz-Pier, Sandra Fernandez Musoles, Carlos Gleeson, Padraig Goodman, Dan F. M. Hines, Michael Hopkins, Michael W. Kumbhar, Pramod Lester, David R. Marin, Bóris Morrison, Abigail Müller, Eric Nowotny, Thomas Peyser, Alexander Plotnikov, Dimitri Richmond, Paul Rowley, Andrew Rumpe, Bernhard Stimberg, Marcel Stokes, Alan B. Tomkins, Adam Trensch, Guido Woodman, Marmaduke Eppler, Jochen Martin Front Neuroinform Neuroscience Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum of detail, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to restrict users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope, and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them. Frontiers Media S.A. 2018-11-05 /pmc/articles/PMC6230720/ /pubmed/30455637 http://dx.doi.org/10.3389/fninf.2018.00068 Text en Copyright © 2018 Blundell, Brette, Cleland, Close, Coca, Davison, Diaz-Pier, Fernandez Musoles, Gleeson, Goodman, Hines, Hopkins, Kumbhar, Lester, Marin, Morrison, Müller, Nowotny, Peyser, Plotnikov, Richmond, Rowley, Rumpe, Stimberg, Stokes, Tomkins, Trensch, Woodman and Eppler. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). 
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Blundell, Inga Brette, Romain Cleland, Thomas A. Close, Thomas G. Coca, Daniel Davison, Andrew P. Diaz-Pier, Sandra Fernandez Musoles, Carlos Gleeson, Padraig Goodman, Dan F. M. Hines, Michael Hopkins, Michael W. Kumbhar, Pramod Lester, David R. Marin, Bóris Morrison, Abigail Müller, Eric Nowotny, Thomas Peyser, Alexander Plotnikov, Dimitri Richmond, Paul Rowley, Andrew Rumpe, Bernhard Stimberg, Marcel Stokes, Alan B. Tomkins, Adam Trensch, Guido Woodman, Marmaduke Eppler, Jochen Martin Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title | Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title_full | Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title_fullStr | Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title_full_unstemmed | Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title_short | Code Generation in Computational Neuroscience: A Review of Tools and Techniques |
title_sort | code generation in computational neuroscience: a review of tools and techniques |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6230720/ https://www.ncbi.nlm.nih.gov/pubmed/30455637 http://dx.doi.org/10.3389/fninf.2018.00068 |
work_keys_str_mv | AT blundellinga codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT bretteromain codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT clelandthomasa codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT closethomasg codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT cocadaniel codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT davisonandrewp codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT diazpiersandra codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT fernandezmusolescarlos codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT gleesonpadraig codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT goodmandanfm codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT hinesmichael codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT hopkinsmichaelw codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT kumbharpramod codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT lesterdavidr codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT marinboris codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT morrisonabigail codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT mullereric codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT nowotnythomas codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT peyseralexander codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT plotnikovdimitri codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT richmondpaul codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT rowleyandrew codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT rumpebernhard codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT stimbergmarcel codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT stokesalanb codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT tomkinsadam codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT trenschguido codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT woodmanmarmaduke codegenerationincomputationalneuroscienceareviewoftoolsandtechniques AT epplerjochenmartin codegenerationincomputationalneuroscienceareviewoftoolsandtechniques |
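The abstract above contrasts hand-translated model code with automatic code generation from high-level model descriptions. As a rough illustration of that third approach, the following Python sketch is not taken from the article or from any of the pipelines it reviews; the leaky integrate-and-fire model, its parameter values, and the generator function are purely illustrative. It stores a model as parameters plus an equation string and generates an executable forward-Euler update function from that description.

```python
# Minimal sketch of code generation for a point neuron model, assuming a
# leaky integrate-and-fire neuron with membrane equation
#   dV/dt = (E_L - V + R * I) / tau_m
# a spike threshold V_th, and a reset potential V_reset.
# All names and values here are illustrative, not taken from the article.

# High-level model description: parameters plus the ODE right-hand side as text.
LIF_MODEL = {
    "parameters": {"tau_m": 10.0, "E_L": -65.0, "R": 1.0,
                   "V_th": -50.0, "V_reset": -65.0},
    "dV_dt": "(E_L - V + R * I) / tau_m",
}

def generate_update(model):
    """Generate and compile a forward-Euler update function from the description."""
    params = "\n    ".join(f"{name} = {value}"
                           for name, value in model["parameters"].items())
    rhs = model["dV_dt"]
    source = (
        "def step(V, I, dt):\n"
        f"    {params}\n"
        f"    V = V + dt * ({rhs})  # forward-Euler integration step\n"
        "    spiked = V >= V_th\n"
        "    if spiked:\n"
        "        V = V_reset  # reset membrane potential after a spike\n"
        "    return V, spiked\n"
    )
    namespace = {}
    exec(source, namespace)  # compile the generated source into a callable
    return namespace["step"]

if __name__ == "__main__":
    step = generate_update(LIF_MODEL)
    V, dt = -65.0, 0.1  # membrane potential (mV), time step (ms)
    for i in range(1000):
        V, spiked = step(V, I=20.0, dt=dt)
        if spiked:
            print(f"spike at t = {i * dt:.1f} ms")
```

In the pipelines the article reviews, the same separation applies at a much larger scale: the high-level description is a model file or dedicated language rather than a Python dictionary, and the generated target is optimized code for a particular simulator, hardware platform, or language rather than a Python function.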