
CNNs reveal the computational implausibility of the expertise hypothesis

Face perception has long served as a classic example of domain specificity of mind and brain. But an alternative “expertise” hypothesis holds that putatively face-specific mechanisms are actually domain-general, and can be recruited for the perception of other objects of expertise (e.g., cars for car experts). Here, we demonstrate the computational implausibility of this hypothesis: Neural network models optimized for generic object categorization provide a better foundation for expert fine-grained discrimination than do models optimized for face recognition.
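The comparison described in the abstract can be pictured as a transfer-learning test: freeze a network trained on one task and ask how well its features support a new fine-grained "expertise" task. The sketch below is a minimal illustration of that logic, not the authors' code. It uses an ImageNet-pretrained ResNet-50 as the object-categorization arm, extracts frozen penultimate-layer features for a hypothetical car-image dataset at data/cars (one folder per car model), and fits a linear probe; a face-identity-trained network of the same architecture would be scored the same way for the other arm. It assumes PyTorch, torchvision >= 0.13, and scikit-learn.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

CAR_DATA_DIR = "data/cars"  # hypothetical fine-grained car dataset, one folder per car model

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Object-categorization-trained backbone; to run the other arm of the comparison,
# swap in a face-identity-trained network of the same architecture.
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()  # drop the classifier head, keep penultimate-layer features
backbone.eval()

dataset = datasets.ImageFolder(CAR_DATA_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=64, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images))
        labels.append(targets)
features = torch.cat(features).numpy()
labels = torch.cat(labels).numpy()

# Linear probe: how well do the frozen features support fine-grained car discrimination?
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"Linear-probe accuracy on held-out car images: {probe.score(X_test, y_test):.3f}")

The same score computed for a face-trained backbone gives the comparison the abstract summarizes: whichever feature space yields the higher probe accuracy provides the better foundation for expert fine-grained discrimination.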

Bibliographic Details
Main Authors: Kanwisher, Nancy; Gupta, Pranjul; Dobs, Katharina
Format: Online Article Text
Language: English
Published: Elsevier, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9923184/
https://www.ncbi.nlm.nih.gov/pubmed/36794151
http://dx.doi.org/10.1016/j.isci.2023.105976
author Kanwisher, Nancy
Gupta, Pranjul
Dobs, Katharina
collection PubMed
description Face perception has long served as a classic example of domain specificity of mind and brain. But an alternative “expertise” hypothesis holds that putatively face-specific mechanisms are actually domain-general, and can be recruited for the perception of other objects of expertise (e.g., cars for car experts). Here, we demonstrate the computational implausibility of this hypothesis: Neural network models optimized for generic object categorization provide a better foundation for expert fine-grained discrimination than do models optimized for face recognition.
format Online
Article
Text
id pubmed-9923184
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9923184 2023-02-14 CNNs reveal the computational implausibility of the expertise hypothesis. Kanwisher, Nancy; Gupta, Pranjul; Dobs, Katharina. iScience, Article. Elsevier, 2023-01-14. /pmc/articles/PMC9923184/ /pubmed/36794151 http://dx.doi.org/10.1016/j.isci.2023.105976. Text, en. © 2023 The Author(s). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title CNNs reveal the computational implausibility of the expertise hypothesis
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9923184/
https://www.ncbi.nlm.nih.gov/pubmed/36794151
http://dx.doi.org/10.1016/j.isci.2023.105976