
Assessing, Testing and Estimating the Amount of Fine-Tuning by Means of Active Information

A general framework is introduced to estimate how much external information has been infused into a search algorithm, the so-called active information. This is rephrased as a test of fine-tuning, where tuning corresponds to the amount of pre-specified knowledge that the algorithm makes use of in order to reach a certain target. A function f quantifies specificity for each possible outcome x of a search, so that the target of the algorithm is a set of highly specified states, whereas fine-tuning occurs if it is much more likely for the algorithm to reach the target as intended than by chance. The distribution of a random outcome X of the algorithm involves a parameter θ that quantifies how much background information has been infused. A simple choice of this parameter is to use θ in order to exponentially tilt the distribution of the outcome of the search algorithm under the null distribution of no tuning, so that an exponential family of distributions is obtained. Such algorithms are obtained by iterating a Metropolis–Hastings type of Markov chain, which makes it possible to compute their active information under the equilibrium and non-equilibrium of the Markov chain, with or without stopping when the targeted set of fine-tuned states has been reached. Other choices of tuning parameters θ are discussed as well. Nonparametric and parametric estimators of active information and tests of fine-tuning are developed when repeated and independent outcomes of the algorithm are available. The theory is illustrated with examples from cosmology, student learning, reinforcement learning, a Moran type model of population genetics, and evolutionary programming.
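
Three of the steps described above are concrete enough to sketch in code: exponential tilting of the null distribution, iterating a Metropolis–Hastings chain whose stationary law is the tilted distribution, and a nonparametric plug-in estimate of active information, I+ = log2(P_θ(A)/P_0(A)) in bits, from repeated and independent runs. The Python sketch below is illustrative only: the 50-state space, the linear specificity function f, the threshold defining the target A, and the value θ = 5 are assumptions made for this example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): 50 states, a linear
# specificity function f, a uniform null distribution p0, and a target
# set A of highly specified states.
n = 50
f = np.linspace(0.0, 1.0, n)     # specificity f(x) of each state x
p0 = np.full(n, 1.0 / n)         # null distribution: blind chance
target = f >= 0.9                # target A: highly specified states

def tilted(theta):
    """Exponential tilting: p_theta(x) proportional to p0(x) * exp(theta * f(x))."""
    w = p0 * np.exp(theta * f)
    return w / w.sum()

def mh_sample(theta, steps):
    """Metropolis-Hastings chain whose stationary law is the tilted distribution."""
    x = rng.integers(n)
    for _ in range(steps):
        y = (x + rng.choice([-1, 1])) % n   # symmetric proposal on a cycle
        # Symmetric proposal: accept with probability min(1, p_theta(y)/p_theta(x)).
        if rng.random() < np.exp(theta * (f[y] - f[x])):
            x = y
    return x

theta = 5.0

# Active information of the target: I+ = log2( P_theta(A) / P_0(A) ).
p_null = p0[target].sum()
q_exact = tilted(theta)[target].sum()
print("exact     I+ =", np.log2(q_exact / p_null))

# Nonparametric plug-in estimate from repeated, independent runs of the
# algorithm: the fraction of final states landing in A estimates P_theta(A).
finals = np.array([mh_sample(theta, steps=2_000) for _ in range(200)])
q_hat = target[finals].mean()
print("estimated I+ =", np.log2(q_hat / p_null))   # -inf if no run hits A
```

The plug-in estimate is the simplest nonparametric choice; with few runs it can record zero hits and return −∞, which is one practical reason for the more refined estimators and tests that the abstract mentions.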

Bibliographic Details
Main Authors: Díaz-Pachón, Daniel Andrés, Hössjer, Ola
Format: Online Article (Text)
Language: English
Published: MDPI, 2022-09-21
Journal: Entropy (Basel)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9601319/
https://www.ncbi.nlm.nih.gov/pubmed/37420343
http://dx.doi.org/10.3390/e24101323
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).