
Neuromorphic Hardware Learns to Learn

Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide a limited range of improvements. We instead employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule which they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time that Learning-to-Learn benefits from such hardware; in particular, it provides the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations.
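The abstract above names the cross-entropy method as one of the gradient-free optimizers used to tune hyperparameters and learning rules. The following is a minimal illustrative sketch of that technique, not the paper's implementation; the objective function and all names here are hypothetical stand-ins for "reward achieved by a neuromorphic agent using hyperparameters p":

```python
import numpy as np

def cross_entropy_method(objective, dim, n_iter=60, pop_size=50,
                         elite_frac=0.2, seed=0):
    """Gradient-free search: sample candidates from a diagonal Gaussian,
    then refit the Gaussian to the best-scoring ("elite") candidates."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.ones(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iter):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        scores = np.array([objective(p) for p in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]  # keep highest scores
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + 1e-6  # floor keeps sampling alive
    return mean

# Hypothetical stand-in objective: reward peaks at a hidden optimum,
# mimicking "how well the hardware learns under these hyperparameters".
hidden_optimum = np.array([0.7, -1.3, 0.25])
reward = lambda p: -np.sum((p - hidden_optimum) ** 2)
best = cross_entropy_method(reward, dim=3)
```

Because the loop needs only scalar scores, the same pattern applies when each evaluation is a full (possibly hardware-accelerated) learning episode, which is why such methods pair well with fast neuromorphic emulation.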

Bibliographic Details
Main Authors: Bohnstingl, Thomas, Scherr, Franz, Pehle, Christian, Meier, Karlheinz, Maass, Wolfgang
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2019
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6536858/
https://www.ncbi.nlm.nih.gov/pubmed/31178681
http://dx.doi.org/10.3389/fnins.2019.00483
_version_ 1783421863081279488
author Bohnstingl, Thomas
Scherr, Franz
Pehle, Christian
Meier, Karlheinz
Maass, Wolfgang
author_facet Bohnstingl, Thomas
Scherr, Franz
Pehle, Christian
Meier, Karlheinz
Maass, Wolfgang
author_sort Bohnstingl, Thomas
collection PubMed
description Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide a limited range of improvements. We instead employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule which they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time that Learning-to-Learn benefits from such hardware; in particular, it provides the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations.
format Online
Article
Text
id pubmed-6536858
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-6536858 2019-06-07 Neuromorphic Hardware Learns to Learn Bohnstingl, Thomas Scherr, Franz Pehle, Christian Meier, Karlheinz Maass, Wolfgang Front Neurosci Neuroscience Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide a limited range of improvements. We instead employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule which they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time that Learning-to-Learn benefits from such hardware; in particular, it provides the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations. Frontiers Media S.A. 2019-05-21 /pmc/articles/PMC6536858/ /pubmed/31178681 http://dx.doi.org/10.3389/fnins.2019.00483 Text en Copyright © 2019 Bohnstingl, Scherr, Pehle, Meier and Maass. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). 
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Bohnstingl, Thomas
Scherr, Franz
Pehle, Christian
Meier, Karlheinz
Maass, Wolfgang
Neuromorphic Hardware Learns to Learn
title Neuromorphic Hardware Learns to Learn
title_full Neuromorphic Hardware Learns to Learn
title_fullStr Neuromorphic Hardware Learns to Learn
title_full_unstemmed Neuromorphic Hardware Learns to Learn
title_short Neuromorphic Hardware Learns to Learn
title_sort neuromorphic hardware learns to learn
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6536858/
https://www.ncbi.nlm.nih.gov/pubmed/31178681
http://dx.doi.org/10.3389/fnins.2019.00483
work_keys_str_mv AT bohnstinglthomas neuromorphichardwarelearnstolearn
AT scherrfranz neuromorphichardwarelearnstolearn
AT pehlechristian neuromorphichardwarelearnstolearn
AT meierkarlheinz neuromorphichardwarelearnstolearn
AT maasswolfgang neuromorphichardwarelearnstolearn