Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

In recent years, artificial neural networks have achieved performance close to or better than that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have seen remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates the comparison of different neural networks with human performance, in order to deepen our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) better matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that artificial grammars can be learnt by human subjects after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars of different complexity levels in humans as well as in feedforward and recurrent networks. Our results show that both architectures can "learn" the grammars (via error back-propagation) after the same number of training sequences as humans, but that recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, as in visual processing, where feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, the difference in performance between the two architectures over ten regular grammars shows that simpler, more explicit grammars are better learnt by recurrent architectures. This supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks may capture the dynamics involved in implicit learning.
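
The comparison the abstract describes, training a feedforward and a recurrent network by back-propagation to judge the grammaticality of strings, can be illustrated with a short sketch. The code below is hypothetical and is not the authors' implementation: the finite-state grammar, alphabet, network sizes, and training regime are all invented for illustration. It merely shows one way to train both architecture types on the same grammaticality-judgment task and compare their accuracy (PyTorch assumed).

```python
# Hypothetical sketch (not the paper's code): a feedforward vs. a recurrent
# network trained by back-propagation to judge the grammaticality of strings
# generated by a toy finite-state grammar. All details here are invented.
import random
import torch
import torch.nn as nn

SYMBOLS = "MSVRX"                 # invented alphabet
PAD = len(SYMBOLS)                # index of the padding symbol
MAXLEN = 8                        # fixed (padded) string length

# Toy grammar: state -> list of (emitted symbol, next state); None terminates.
GRAMMAR = {0: [("M", 1), ("V", 2)],
           1: [("S", 1), ("X", 2)],
           2: [("R", 2), ("X", None)]}

def grammatical():
    state, seq = 0, []
    while state is not None and len(seq) < MAXLEN:
        sym, state = random.choice(GRAMMAR[state])
        seq.append(sym)
    return seq

def ungrammatical():
    # Scrambled grammatical string as a foil (rarely it may stay legal;
    # acceptable noise for a sketch).
    seq = grammatical()
    random.shuffle(seq)
    return seq

def encode(seq):
    # One-hot encode, padded to MAXLEN rows of len(SYMBOLS)+1 columns.
    x = torch.zeros(MAXLEN, len(SYMBOLS) + 1)
    for t in range(MAXLEN):
        x[t, SYMBOLS.index(seq[t]) if t < len(seq) else PAD] = 1.0
    return x

def batch(n):
    xs, ys = [], []
    for _ in range(n):
        gram = random.random() < 0.5
        xs.append(encode(grammatical() if gram else ungrammatical()))
        ys.append(float(gram))
    return torch.stack(xs), torch.tensor(ys)

class Feedforward(nn.Module):
    # Sees the whole padded string at once, as one flat vector.
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(MAXLEN * (len(SYMBOLS) + 1), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x).squeeze(1)

class Recurrent(nn.Module):
    # Consumes the string symbol by symbol; judges from the final state.
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(len(SYMBOLS) + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, h = self.rnn(x)            # h: (1, batch, hidden) final state
        return self.out(h[0]).squeeze(1)

def train_and_test(model, steps=200, n_test=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):            # back-propagation on fresh mini-batches
        x, y = batch(32)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():             # held-out accuracy
        x, y = batch(n_test)
        return ((model(x) > 0) == (y > 0.5)).float().mean().item()

for net in (Feedforward(), Recurrent()):
    print(type(net).__name__, "accuracy:", round(train_and_test(net), 3))
```

The sketch mirrors the paper's contrast only at a schematic level: the feedforward model receives each padded string as a single flat vector, while the recurrent model processes it one symbol at a time and classifies from its final hidden state.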

Bibliographic Details
Main Authors: Alamia, Andrea; Gauducheau, Victor; Paisios, Dimitri; VanRullen, Rufin
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7747619/
https://www.ncbi.nlm.nih.gov/pubmed/33335190
http://dx.doi.org/10.1038/s41598-020-79127-y
Journal: Sci Rep
Published Online: 2020-12-17
Collection: PubMed (National Center for Biotechnology Information)
Record ID: pubmed-7747619
Record Format: MEDLINE/PubMed
License: © The Author(s) 2020. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.