A parameter-free learning automaton scheme

Bibliographic Details
Main Authors: Ren, Xudie, Li, Shenghong, Ge, Hao
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9539663/
https://www.ncbi.nlm.nih.gov/pubmed/36213147
http://dx.doi.org/10.3389/fnbot.2022.999658
_version_ 1784803538395725824
author Ren, Xudie
Li, Shenghong
Ge, Hao
author_facet Ren, Xudie
Li, Shenghong
Ge, Hao
author_sort Ren, Xudie
collection PubMed
description For a learning automaton, a proper configuration of the learning parameters is crucial. To ensure stable and reliable performance in stochastic environments, existing LA schemes require manual parameter tuning, but the tuning procedure is time-consuming and costly in environment interactions. This is a fatal limitation for LA-based applications, especially in environments where interactions are expensive. In this paper, we propose a parameter-free learning automaton (PFLA) scheme that avoids parameter tuning through a Bayesian inference method. In contrast to existing schemes, whose parameters must be carefully tuned to the environment, PFLA works well with a single consistent set of parameters across various environments. This intriguing property dramatically reduces the difficulty of applying a learning automaton to an unknown stochastic environment. A rigorous proof of ϵ-optimality for the proposed scheme is provided, and numerical experiments are carried out on benchmark environments to verify its effectiveness. The results show that, without any parameter-tuning cost, the proposed PFLA achieves competitive performance compared with other well-tuned schemes and outperforms untuned schemes in consistency of performance.
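To illustrate the contrast the abstract draws, the sketch below is a minimal, hypothetical example of Bayesian action selection in a two-armed stochastic environment (a Thompson-sampling-style learner with Beta posteriors). It is not the paper's PFLA algorithm; it only shows the general idea of replacing a hand-tuned learning rate with posterior inference over each action's reward probability.

```python
import random

# Minimal sketch, NOT the paper's PFLA scheme: a Bayesian learner that
# keeps a Beta(1 + successes, 1 + failures) posterior per action and
# needs no hand-tuned learning rate.

def thompson_step(successes, failures, rng=random):
    """Sample each action's posterior and pick the largest sample."""
    samples = [rng.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run(reward_probs, steps=5000, seed=0):
    """Interact with a stationary stochastic environment for `steps` rounds."""
    rng = random.Random(seed)
    n = len(reward_probs)
    succ, fail = [0] * n, [0] * n
    for _ in range(steps):
        a = thompson_step(succ, fail, rng)
        if rng.random() < reward_probs[a]:   # Bernoulli reward feedback
            succ[a] += 1
        else:
            fail[a] += 1
    return succ, fail

if __name__ == "__main__":
    succ, fail = run([0.8, 0.6, 0.5])
    pulls = [s + f for s, f in zip(succ, fail)]
    print(pulls)  # the optimal action accumulates most of the pulls
```

In a fixed-structure automaton such as the classical linear reward-inaction scheme, a learning-rate parameter would have to be tuned per environment to trade off speed against accuracy; the Bayesian learner above adapts its exploration automatically as the posteriors sharpen, which is the kind of property the abstract attributes to PFLA.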
format Online
Article
Text
id pubmed-9539663
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9539663 2022-10-08 A parameter-free learning automaton scheme Ren, Xudie Li, Shenghong Ge, Hao Front Neurorobot Neuroscience For a learning automaton, a proper configuration of the learning parameters is crucial. To ensure stable and reliable performance in stochastic environments, existing LA schemes require manual parameter tuning, but the tuning procedure is time-consuming and costly in environment interactions. This is a fatal limitation for LA-based applications, especially in environments where interactions are expensive. In this paper, we propose a parameter-free learning automaton (PFLA) scheme that avoids parameter tuning through a Bayesian inference method. In contrast to existing schemes, whose parameters must be carefully tuned to the environment, PFLA works well with a single consistent set of parameters across various environments. This intriguing property dramatically reduces the difficulty of applying a learning automaton to an unknown stochastic environment. A rigorous proof of ϵ-optimality for the proposed scheme is provided, and numerical experiments are carried out on benchmark environments to verify its effectiveness. The results show that, without any parameter-tuning cost, the proposed PFLA achieves competitive performance compared with other well-tuned schemes and outperforms untuned schemes in consistency of performance. Frontiers Media S.A. 2022-09-23 /pmc/articles/PMC9539663/ /pubmed/36213147 http://dx.doi.org/10.3389/fnbot.2022.999658 Text en Copyright © 2022 Ren, Li and Ge. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Ren, Xudie
Li, Shenghong
Ge, Hao
A parameter-free learning automaton scheme
title A parameter-free learning automaton scheme
title_full A parameter-free learning automaton scheme
title_fullStr A parameter-free learning automaton scheme
title_full_unstemmed A parameter-free learning automaton scheme
title_short A parameter-free learning automaton scheme
title_sort parameter-free learning automaton scheme
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9539663/
https://www.ncbi.nlm.nih.gov/pubmed/36213147
http://dx.doi.org/10.3389/fnbot.2022.999658
work_keys_str_mv AT renxudie aparameterfreelearningautomatonscheme
AT lishenghong aparameterfreelearningautomatonscheme
AT gehao aparameterfreelearningautomatonscheme
AT renxudie parameterfreelearningautomatonscheme
AT lishenghong parameterfreelearningautomatonscheme
AT gehao parameterfreelearningautomatonscheme