
Algorithmic Probability-Guided Machine Learning on Non-Differentiable Spaces

We show how complexity theory can be introduced into machine learning to help bring together apparently disparate areas of current research. We show that this model-driven approach may require less training data and can potentially be more generalizable, as it shows greater resilience to random attacks. In an algorithmic space, the order of the elements is given by their algorithmic probability, which arises naturally from computable processes. We investigate the shape of a discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that differentiability is not required to achieve results similar to those obtained using differentiable programming approaches such as deep learning. The examples we use to compare the two approaches are necessarily small, given the computational power required to estimate algorithmic complexity. We find and report that 1) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; 2) solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; 3) an algorithmically directed search technique over non-smooth manifolds can be formulated and conducted; and 4) exploitation techniques and numerical methods for algorithmic search can be used to navigate these discrete, non-differentiable spaces. These methods are applied to a) the identification of generative rules from data observations; b) image classification problems, where they are more resilient to pixel attacks than neural networks; c) the identification of equation parameters from a small data set in the presence of noise, in a continuous ODE system; and d) the classification of Boolean NK networks by 1) network topology, 2) underlying Boolean function, and 3) number of incoming edges.
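For context on the quantities named in the abstract: the algorithmic probability of a string x, and its standard relation to Kolmogorov complexity K, are

    m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}, \qquad K(x) = -\log_2 m(x) + O(1),

where U is a prefix-free universal Turing machine and the sum runs over all programs p whose output is x. Ordering the elements of an algorithmic space by m is therefore, up to an additive constant in K, the same as ordering them from algorithmically simplest to most complex. A loss function "parametrized by algorithmic complexity" can then, for instance, score a candidate model M against observed data y as J(M, y) = K(M) + K(y | M), preferring the simplest model that accounts for the data; this particular form is an illustrative sketch, not a formula quoted from the article.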


Bibliographic Details
Main Authors: Hernández-Orozco, Santiago, Zenil, Hector, Riedel, Jürgen, Uccello, Adam, Kiani, Narsis A., Tegnér, Jesper
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7944352/
https://www.ncbi.nlm.nih.gov/pubmed/33733213
http://dx.doi.org/10.3389/frai.2020.567356
Journal: Front Artif Intell (section: Artificial Intelligence)
Collection: PubMed (National Center for Biotechnology Information), record pubmed-7944352, MEDLINE/PubMed format
Published online: 2021-01-25
Copyright © 2021 Hernández-Orozco, Zenil, Riedel, Uccello, Kiani and Tegnér. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.