
RateML: A Code Generation Tool for Brain Network Models


Bibliographic Details
Main Authors: van der Vlag, Michiel, Woodman, Marmaduke, Fousek, Jan, Diaz-Pier, Sandra, Pérez Martín, Aarón, Jirsa, Viktor, Morrison, Abigail
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10013028/
https://www.ncbi.nlm.nih.gov/pubmed/36926112
http://dx.doi.org/10.3389/fnetp.2022.826345
_version_ 1784906732236963840
author van der Vlag, Michiel
Woodman, Marmaduke
Fousek, Jan
Diaz-Pier, Sandra
Pérez Martín, Aarón
Jirsa, Viktor
Morrison, Abigail
author_facet van der Vlag, Michiel
Woodman, Marmaduke
Fousek, Jan
Diaz-Pier, Sandra
Pérez Martín, Aarón
Jirsa, Viktor
Morrison, Abigail
author_sort van der Vlag, Michiel
collection PubMed
description Whole brain network models are now an established tool in scientific and clinical research; however, their use in a larger workflow still adds significant informatics complexity. We propose a tool, RateML, that enables users to generate such models from a succinct declarative description, in which the mathematics of the model are described without specifying how their simulation should be implemented. RateML builds on NeuroML’s Low Entropy Model Specification (LEMS), an XML-based language for specifying models of dynamical systems, allowing descriptions of neural mass and discretized neural field models, as implemented by The Virtual Brain (TVB) simulator: the end user describes their model’s mathematics once and generates and runs code for different languages, targeting both CPUs for fast single simulations and GPUs for parallel ensemble simulations. High-performance parallel simulations are crucial for tuning many parameters of a model to empirical data such as functional magnetic resonance imaging (fMRI), with reasonable execution times on small or modest hardware resources. Specifically, while RateML can generate Python model code, it also enables generation of Compute Unified Device Architecture (CUDA) C++ code for NVIDIA GPUs. When a CUDA implementation of a model is generated, a tailored model driver class is produced, enabling the user to tweak the driver by hand and perform the parameter sweep. The model and driver can be executed on any compute-capable NVIDIA GPU with a high degree of parallelization, either locally or in a compute cluster environment. The results reported in this manuscript show that with the CUDA code generated by RateML, it is possible to explore thousands of parameter combinations with a single Graphics Processing Unit for different models, substantially reducing parameter exploration times and resource usage for brain network models, in turn accelerating the research workflow itself. This provides a new tool to create efficient and broader parameter fitting workflows, support studies on larger cohorts, and derive more robust and statistically relevant conclusions about brain dynamics.
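The abstract describes a generated driver that sweeps many parameter combinations in parallel on a GPU. A minimal sketch of that sweep structure, in plain Python: the function and parameter names (`run_simulation`, `couplings`, `speeds`) are hypothetical illustrations, not the actual RateML API, and the sequential loop stands in for the thousands of CUDA threads a generated driver would launch.

```python
# Hedged sketch of a parameter-sweep driver, assuming a grid search over
# two hypothetical model parameters (global coupling and conduction speed).
import itertools

def run_simulation(coupling, speed):
    # Placeholder for one generated model run; on a GPU, each parameter
    # combination would be handled by its own thread in parallel.
    return {"coupling": coupling, "speed": speed}

# A 32 x 32 grid of parameter values to explore (illustrative numbers).
couplings = [0.1 * i for i in range(1, 33)]   # 32 coupling strengths
speeds = [1.0 + 0.5 * i for i in range(32)]   # 32 conduction speeds

# Enumerate every combination; a CUDA driver would dispatch all 1,024
# simulations concurrently instead of looping over them.
results = [run_simulation(c, s)
           for c, s in itertools.product(couplings, speeds)]
print(len(results))  # 1024
```

The point of the sketch is only the shape of the workload: the sweep is embarrassingly parallel, which is why a single GPU can cover thousands of combinations in one launch.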
format Online
Article
Text
id pubmed-10013028
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-100130282023-03-15 RateML: A Code Generation Tool for Brain Network Models van der Vlag, Michiel Woodman, Marmaduke Fousek, Jan Diaz-Pier, Sandra Pérez Martín, Aarón Jirsa, Viktor Morrison, Abigail Front Netw Physiol Network Physiology Frontiers Media S.A. 2022-02-14 /pmc/articles/PMC10013028/ /pubmed/36926112 http://dx.doi.org/10.3389/fnetp.2022.826345 Text en Copyright © 2022 van der Vlag, Woodman, Fousek, Diaz-Pier, Pérez Martín, Jirsa and Morrison. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Network Physiology
van der Vlag, Michiel
Woodman, Marmaduke
Fousek, Jan
Diaz-Pier, Sandra
Pérez Martín, Aarón
Jirsa, Viktor
Morrison, Abigail
RateML: A Code Generation Tool for Brain Network Models
title RateML: A Code Generation Tool for Brain Network Models
title_full RateML: A Code Generation Tool for Brain Network Models
title_fullStr RateML: A Code Generation Tool for Brain Network Models
title_full_unstemmed RateML: A Code Generation Tool for Brain Network Models
title_short RateML: A Code Generation Tool for Brain Network Models
title_sort rateml: a code generation tool for brain network models
topic Network Physiology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10013028/
https://www.ncbi.nlm.nih.gov/pubmed/36926112
http://dx.doi.org/10.3389/fnetp.2022.826345
work_keys_str_mv AT vandervlagmichiel ratemlacodegenerationtoolforbrainnetworkmodels
AT woodmanmarmaduke ratemlacodegenerationtoolforbrainnetworkmodels
AT fousekjan ratemlacodegenerationtoolforbrainnetworkmodels
AT diazpiersandra ratemlacodegenerationtoolforbrainnetworkmodels
AT perezmartinaaron ratemlacodegenerationtoolforbrainnetworkmodels
AT jirsaviktor ratemlacodegenerationtoolforbrainnetworkmodels
AT morrisonabigail ratemlacodegenerationtoolforbrainnetworkmodels