Stochastic surprisal: An inferential measurement of free energy in neural networks

Bibliographic Details
Main Authors: Prabhushankar, Mohit; AlRegib, Ghassan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10043257/
https://www.ncbi.nlm.nih.gov/pubmed/36998731
http://dx.doi.org/10.3389/fnins.2023.926418
author Prabhushankar, Mohit
AlRegib, Ghassan
collection PubMed
description This paper conjectures and validates a framework that allows for action during inference in supervised neural networks. Supervised neural networks are constructed with the objective of maximizing their performance metric on any given task. This is done by reducing free energy and its associated surprisal during training. However, the bottom-up inference nature of supervised networks is a passive process that renders them fallible to noise. In this paper, we provide a thorough background of supervised neural networks, both generative and discriminative, and discuss their functionality from the perspective of the free energy principle. We then provide a framework for introducing action during inference. We introduce a new measurement called stochastic surprisal that is a function of the network, the input, and any possible action. This action can be any one of the outputs that the neural network has learnt, thereby lending stochasticity to the measurement. Stochastic surprisal is validated on two applications: image quality assessment and recognition under noisy conditions. We show that, while noise characteristics are ignored to make recognition robust, they are analyzed to estimate image quality scores. We apply stochastic surprisal to two applications, three datasets, and as a plug-in on 12 networks. In all cases, it provides a statistically significant increase across all measures. We conclude by discussing the implications of the proposed stochastic surprisal in other areas of cognitive psychology, including expectancy-mismatch and abductive reasoning.
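The description above characterizes stochastic surprisal as a function of the network, the input, and a hypothesized action, where an action is any one of the learnt output classes. As a minimal illustrative sketch only, such a measurement could be probed as below; the function name and the cross-entropy approximation are assumptions for illustration, not the authors' formulation.

```python
# Hypothetical sketch of a stochastic-surprisal-style score (an assumed
# approximation, not the paper's exact method): for each possible
# "action" (a class label the network has learnt), measure how
# surprising the input is under the hypothesis that this action is the
# true label, using cross-entropy as a stand-in for surprisal.
import torch
import torch.nn.functional as F

def stochastic_surprisal(network, x, num_classes):
    """Return one surprisal score per hypothesized action (class label)."""
    network.eval()
    with torch.no_grad():
        logits = network(x)  # shape: (1, num_classes)
    scores = []
    for action in range(num_classes):
        target = torch.tensor([action])
        # Cross-entropy is the negative log-probability assigned to the
        # hypothesized action: a low value means the network is
        # unsurprised by that action for this input.
        scores.append(F.cross_entropy(logits, target).item())
    return scores
```

Under this reading, the spread of scores across actions could serve as the kind of plug-in signal the abstract describes for downstream tasks such as image quality assessment or noise-robust recognition.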
format Online
Article
Text
id pubmed-10043257
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
journal Front Neurosci (Neuroscience)
published_date 2023-03-14 (added to PMC 2023-03-29)
rights Copyright © 2023 Prabhushankar and AlRegib. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice; no use, distribution, or reproduction is permitted which does not comply with these terms.
title Stochastic surprisal: An inferential measurement of free energy in neural networks
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10043257/
https://www.ncbi.nlm.nih.gov/pubmed/36998731
http://dx.doi.org/10.3389/fnins.2023.926418