Attentional Bias in Human Category Learning: The Case of Deep Learning
Main Authors: | Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2018 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5909172/ https://www.ncbi.nlm.nih.gov/pubmed/29706907 http://dx.doi.org/10.3389/fpsyg.2018.00374 |
_version_ | 1783315846479740928 |
author | Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José |
author_facet | Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José |
author_sort | Hanson, Catherine |
collection | PubMed |
description | Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity, in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that, using the same low dimensional stimuli, DL, unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning through feature development, thereby reducing destructive feature competition: feature detectors are incrementally refined throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning. |
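The single-hidden-layer BP network discussed in the abstract can be sketched in a few lines. The following is a minimal illustrative toy, not the authors' or Kruschke's (1993) actual setup: the 2-D "separable" and "integral" category rules, the network size, the learning rate, and the epoch count are all assumptions chosen only to make the separable/integral contrast concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_separable(n=200):
    # Toy separable rule: category depends on a single feature dimension,
    # so attending to one axis suffices (an assumption for illustration).
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, y

def make_integral(n=200):
    # Toy integral rule: category depends on a combination of both
    # features, so no single dimension is diagnostic on its own.
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=4, lr=1.0, epochs=3000):
    """Single-hidden-layer network trained with plain batch backpropagation.
    Returns the mean squared error recorded at every epoch."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, size=hidden)
    b2 = 0.0
    errors = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)      # hidden-unit activations
        out = sigmoid(H @ W2 + b2)    # one scalar output per exemplar
        err = out - y
        errors.append(float(np.mean(err ** 2)))
        # Gradients of the mean squared error through the sigmoid units
        d_out = err * out * (1.0 - out)
        d_hid = np.outer(d_out, W2) * H * (1.0 - H)
        W2 -= lr * (H.T @ d_out) / n
        b2 -= lr * d_out.mean()
        W1 -= lr * (X.T @ d_hid) / n
        b1 -= lr * d_hid.mean(axis=0)
    return errors

err_sep = train_bp(*make_separable())
err_int = train_bp(*make_integral())
print(f"final MSE  separable: {err_sep[-1]:.3f}  integral: {err_int[-1]:.3f}")
```

Because both toy rules here are linearly separable, plain BP learns them comparably well, loosely echoing the replication the abstract reports for low dimensional stimuli; probing the DL contrast would require deeper networks and stimuli like those in the paper.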
format | Online Article Text |
id | pubmed-5909172 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-59091722018-04-27 Attentional Bias in Human Category Learning: The Case of Deep Learning Hanson, Catherine Caglar, Leyla Roskan Hanson, Stephen José Front Psychol Psychology Frontiers Media S.A. 2018-04-13 /pmc/articles/PMC5909172/ /pubmed/29706907 http://dx.doi.org/10.3389/fpsyg.2018.00374 Text en Copyright © 2018 Hanson, Caglar and Hanson. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Hanson, Catherine Caglar, Leyla Roskan Hanson, Stephen José Attentional Bias in Human Category Learning: The Case of Deep Learning |
title | Attentional Bias in Human Category Learning: The Case of Deep Learning |
title_full | Attentional Bias in Human Category Learning: The Case of Deep Learning |
title_fullStr | Attentional Bias in Human Category Learning: The Case of Deep Learning |
title_full_unstemmed | Attentional Bias in Human Category Learning: The Case of Deep Learning |
title_short | Attentional Bias in Human Category Learning: The Case of Deep Learning |
title_sort | attentional bias in human category learning: the case of deep learning |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5909172/ https://www.ncbi.nlm.nih.gov/pubmed/29706907 http://dx.doi.org/10.3389/fpsyg.2018.00374 |
work_keys_str_mv | AT hansoncatherine attentionalbiasinhumancategorylearningthecaseofdeeplearning AT caglarleylaroskan attentionalbiasinhumancategorylearningthecaseofdeeplearning AT hansonstephenjose attentionalbiasinhumancategorylearningthecaseofdeeplearning |