
Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity


Bibliographic Details
Main Authors: Daube, Christoph, Xu, Tian, Zhan, Jiayu, Webb, Andrew, Ince, Robin A.A., Garrod, Oliver G.B., Schyns, Philippe G.
Format: Online Article Text
Language: English
Published: Elsevier 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8515012/
https://www.ncbi.nlm.nih.gov/pubmed/34693374
http://dx.doi.org/10.1016/j.patter.2021.100348
collection PubMed
description Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
id pubmed-8515012
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling Patterns (N Y), Elsevier, published online 2021-09-10. © 2021 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
topic Article