
Why Spiking Neural Networks Are Efficient: A Theorem


Bibliographic Details
Main Authors: Beer, Michael, Urenda, Julio, Kosheleva, Olga, Kreinovich, Vladik
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7274333/
http://dx.doi.org/10.1007/978-3-030-50146-4_5
author Beer, Michael
Urenda, Julio
Kosheleva, Olga
Kreinovich, Vladik
collection PubMed
description Current artificial neural networks are very successful in many machine learning applications, but in some cases they still lag behind human abilities. To improve their performance, a natural idea is to simulate features of biological neurons that are not yet implemented in machine learning. One such feature is that in biological neural networks, signals are represented by trains of spikes. Researchers have tried adding this spikiness to machine learning and have indeed obtained very good results, especially when processing time series (and, more generally, spatio-temporal data). In this paper, we provide a possible theoretical explanation for this empirical success.
format Online Article Text
id pubmed-7274333
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-7274333 2020-06-05 Why Spiking Neural Networks Are Efficient: A Theorem. Beer, Michael; Urenda, Julio; Kosheleva, Olga; Kreinovich, Vladik. Information Processing and Management of Uncertainty in Knowledge-Based Systems, Article. 2020-05-18 /pmc/articles/PMC7274333/ http://dx.doi.org/10.1007/978-3-030-50146-4_5 Text en © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
title Why Spiking Neural Networks Are Efficient: A Theorem
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7274333/
http://dx.doi.org/10.1007/978-3-030-50146-4_5