Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks


Bibliographic Details
Main Authors: Hillar, Christopher; Chan, Tenzin; Taubman, Rachel; Rolnick, David
Format: Online Article (Text)
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8622935/
https://www.ncbi.nlm.nih.gov/pubmed/34828192
http://dx.doi.org/10.3390/e23111494
Description:
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of [Formula: see text] memories for any [Formula: see text]. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
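As background to the abstract's mention of symmetric weights and fixed-point attractor dynamics, the following is a minimal sketch of a classical Hopfield network with Hebbian (outer-product) storage. It is an illustration only; it does not implement the paper's minimum energy flow (MEF) objective, and all names and parameters here are illustrative choices, not the authors'.

```python
import numpy as np

def train_hebbian(patterns):
    """Symmetric weight matrix from +/-1 patterns via outer products; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, max_iters=100):
    """Asynchronous threshold updates; stops at a fixed point of the dynamics."""
    x = x.copy()
    for _ in range(max_iters):
        prev = x.copy()
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
        if np.array_equal(x, prev):  # full sweep with no change: fixed point
            break
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # three random binary memories
W = train_hebbian(patterns)

noisy = patterns[0].copy()
noisy[:6] *= -1                   # corrupt the first memory by flipping 6 bits
recovered = recall(W, noisy)
print((recovered == patterns[0]).sum())  # bits agreeing with the stored memory
```

Because the weights are symmetric, each asynchronous update can only lower the network energy, so the dynamics settle into a fixed-point attractor; with few stored patterns relative to n, that attractor is typically the stored memory, which is the error-correcting behavior the abstract refers to.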
Journal: Entropy (Basel)
Published online: 11 November 2021
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).