The World as a Neural Network

We discuss the possibility that the entire universe, on its most fundamental level, is a neural network. We identify two different types of dynamical degrees of freedom: "trainable" variables (e.g., the bias vector or weight matrix) and "hidden" variables (e.g., the state vector of neurons). We first consider the stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with the free energy representing the phase) and further away from equilibrium by Hamilton–Jacobi equations (with the free energy representing Hamilton's principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study the stochastic evolution of the hidden variables by considering $D$ non-interacting subsystems with average state vectors $\bar{x}_1, \ldots, \bar{x}_D$ and an overall average state vector $\bar{x}_0$. In the limit when the weight matrix is a permutation matrix, the dynamics of $\bar{x}^\mu$ can be described in terms of relativistic strings in an emergent $D+1$ dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor, which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that were described by both quantum mechanics and general relativity. We also discuss the possibility that the two descriptions are holographic duals of each other.
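The record reproduces the abstract without its mathematics, so a brief orientation may help. The Madelung and Hamilton–Jacobi equations invoked above are given here in their standard textbook form; the identification of the phase $S$ with the network's free energy is the paper's proposal and is not encoded in these generic expressions.

% Madelung equations in standard hydrodynamic form: continuity for the
% probability density rho, and a quantum Hamilton-Jacobi equation for the phase S.
\[
\partial_t \rho + \nabla \cdot \left( \rho \, \frac{\nabla S}{m} \right) = 0,
\qquad
\partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0 .
\]

Dropping the final (quantum-potential) term recovers the classical Hamilton–Jacobi equation, which corresponds to the far-from-equilibrium regime the abstract describes. The Einstein–Hilbert term mentioned in connection with the entropy production likewise has the standard form

% Einstein-Hilbert action in its conventional normalization.
\[
S_{\mathrm{EH}} = \frac{1}{16\pi G} \int d^{4}x \, \sqrt{-g} \, R ,
\]

with $g$ the determinant of the metric tensor and $R$ the Ricci scalar; how the symmetries of the Onsager tensor single out this term is worked out in the paper itself, not in this record.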

Bibliographic Details
Main Author: Vanchurin, Vitaly
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712105/
https://www.ncbi.nlm.nih.gov/pubmed/33286978
http://dx.doi.org/10.3390/e22111210

Record Details
Collection: PubMed
Record ID: pubmed-7712105
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Published Online: 26 October 2020
License: © 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).