
On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities.
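
The abstract turns on two ingredients: synaptic plasticity (connection weights that change during a network's lifetime in response to feedback) and the genetic encoding an evolutionary algorithm uses to build the network. As a rough, self-contained illustration of the plasticity side only, the Python sketch below applies a generic reward-modulated, winner-take-all Hebbian update to a tiny feed-forward network on an operant-conditioning-like association task. The rule, topology, and names used here (lifetime_step, eta) are illustrative assumptions, not the specific model, learning rule, or encodings evaluated in the paper; in the paper's setting, an outer evolutionary loop would search over the fixed parts (structure and plasticity parameters) under one of the three encodings.

# Minimal sketch, assuming a generic reward-modulated Hebbian rule
# (illustrative only; not the exact rule or network from the paper).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))   # plastic synaptic weights
eta = 0.05                                      # plasticity rate (the kind of parameter an EA would evolve)

def lifetime_step(W, x, target):
    """One lifetime step: propagate, pick a winning output, apply a reward-gated Hebbian update."""
    y = np.tanh(W @ x)                          # post-synaptic activations
    winner = int(np.argmax(y))                  # winner-take-all "action"
    reward = 1.0 if winner == target else -1.0  # operant-style feedback on that action
    post = np.zeros(n_out)
    post[winner] = 1.0                          # activity of the winning unit
    W = W + eta * reward * np.outer(post, x)    # Hebbian term (post x pre), gated by reward
    return W, y

# Usage example: repeatedly reinforce the association "input 0 -> output 0".
x = np.zeros(n_in)
x[0] = 1.0
for _ in range(200):
    W, y = lifetime_step(W, x, target=0)

print("outputs after learning:", np.tanh(W @ x))  # output 0 should now clearly dominate

The reward-gated update only strengthens or weakens the synapses of the unit that actually fired, which is what lets the weights settle on the rewarded association without any explicit error signal.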


Bibliographic Details
Main Authors: Tonelli, Paul; Mouret, Jean-Baptiste
Format: Online Article Text
Language: English
Published: Public Library of Science 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3827315/
https://www.ncbi.nlm.nih.gov/pubmed/24236099
http://dx.doi.org/10.1371/journal.pone.0079138
_version_ 1782291055323906048
author Tonelli, Paul
Mouret, Jean-Baptiste
author_facet Tonelli, Paul
Mouret, Jean-Baptiste
author_sort Tonelli, Paul
collection PubMed
description A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities.
format Online
Article
Text
id pubmed-3827315
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-3827315 2013-11-14 On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks Tonelli, Paul Mouret, Jean-Baptiste PLoS One Research Article A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, the most regular networks are statistically those with the best learning abilities. Public Library of Science 2013-11-13 /pmc/articles/PMC3827315/ /pubmed/24236099 http://dx.doi.org/10.1371/journal.pone.0079138 Text en © 2013 Tonelli, Mouret http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Tonelli, Paul
Mouret, Jean-Baptiste
On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title_full On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title_fullStr On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title_full_unstemmed On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title_short On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
title_sort on the relationships between generative encodings, regularity, and learning abilities when evolving plastic artificial neural networks
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3827315/
https://www.ncbi.nlm.nih.gov/pubmed/24236099
http://dx.doi.org/10.1371/journal.pone.0079138
work_keys_str_mv AT tonellipaul ontherelationshipsbetweengenerativeencodingsregularityandlearningabilitieswhenevolvingplasticartificialneuralnetworks
AT mouretjeanbaptiste ontherelationshipsbetweengenerativeencodingsregularityandlearningabilitieswhenevolvingplasticartificialneuralnetworks