Adaptive Tuning Curve Widths Improve Sample Efficient Learning

Bibliographic Details
Main Authors: Meier, Florian; Dang-Nhu, Raphaël; Steger, Angelika
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7041413/
https://www.ncbi.nlm.nih.gov/pubmed/32132915
http://dx.doi.org/10.3389/fncom.2020.00012
author Meier, Florian
Dang-Nhu, Raphaël
Steger, Angelika
collection PubMed
description Natural brains perform remarkably well at learning new tasks from a small number of samples, whereas sample-efficient learning is still a major open problem in machine learning. Here, we raise the question of how the neural coding scheme affects sample efficiency, and make first progress on this question by proposing and analyzing a learning algorithm that uses a simple reinforce-type plasticity mechanism and does not require any gradients to learn low-dimensional mappings. It harnesses three biologically plausible mechanisms, namely population codes with bell-shaped tuning curves, continuous attractor mechanisms, and probabilistic synapses, to achieve sample-efficient learning. We show both theoretically and by simulations that population codes with broadly tuned neurons lead to high sample efficiency, whereas codes with sharply tuned neurons account for high final precision. Moreover, dynamically adapting the tuning width during learning gives rise to both high sample efficiency and high final precision. We prove a sample-efficiency guarantee for our algorithm that lies within a logarithmic factor of the information-theoretic optimum. Our simulations show that, for low-dimensional mappings, our learning algorithm achieves sample efficiency comparable to that of multi-layer perceptrons trained by gradient descent, although it does not use any gradients. Furthermore, it achieves competitive sample efficiency in low-dimensional reinforcement learning tasks. From a machine learning perspective, these findings may inspire novel approaches to improving sample efficiency. From a neuroscience perspective, they suggest sample efficiency as a yet-unstudied functional role of adaptive tuning curve width.
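
The description above hinges on one concrete quantity: the width of the bell-shaped tuning curves in a population code. As a minimal sketch of that idea (in Python; this is not the authors' code, and all function names and parameter values are illustrative assumptions), the following snippet builds a one-dimensional Gaussian population code and shows the trade-off the abstract describes: a broad width makes many neurons respond to any given stimulus, while a sharp width localizes the response.

    # Minimal sketch of a 1-D Gaussian ("bell-shaped") population code.
    # NOT the paper's implementation; names and values are illustrative only.
    import numpy as np

    def population_response(x, centers, sigma):
        # Each neuron's activity for stimulus x: a Gaussian tuning curve
        # peaked at its preferred value in `centers`, with tuning width `sigma`.
        return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

    def decode(activity, centers):
        # Population-vector readout: activity-weighted mean of preferred values.
        return np.sum(activity * centers) / np.sum(activity)

    centers = np.linspace(0.0, 1.0, 50)   # preferred stimuli of 50 neurons
    for sigma in (0.2, 0.02):             # broadly vs. sharply tuned population
        activity = population_response(0.63, centers, sigma)
        n_active = int(np.sum(activity > 0.5))
        print(f"sigma={sigma}: {n_active} neurons strongly active, "
              f"decoded x = {decode(activity, centers):.3f}")

Shrinking sigma over the course of learning, as in the adaptive-width scheme the abstract summarizes, would move the code from the broad regime (many neurons respond to each sample, hence fast early learning) to the sharp regime (localized responses, hence high final precision).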
format Online Article Text
id pubmed-7041413
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7041413 2020-03-04
Adaptive Tuning Curve Widths Improve Sample Efficient Learning
Meier, Florian; Dang-Nhu, Raphaël; Steger, Angelika
Front Comput Neurosci (Neuroscience), Frontiers Media S.A., published 2020-02-18
Copyright © 2020 Meier, Dang-Nhu and Steger (http://creativecommons.org/licenses/by/4.0/). This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Adaptive Tuning Curve Widths Improve Sample Efficient Learning
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7041413/
https://www.ncbi.nlm.nih.gov/pubmed/32132915
http://dx.doi.org/10.3389/fncom.2020.00012