Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces
Deep convolutional neural networks (DCNNs) can now match human performance in challenging, complex tasks, but it remains unknown whether DCNNs achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit the representations that DCNNs and humans use when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, prior task experience, namely that VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification) as humans do, seemed necessary for such representational similarity, because AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded in gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar, implementation-independent representation to achieve the same computational goal.
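The reverse-correlation logic mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual pipeline: `base_face` (a grayscale average face with pixel values in [0, 1]) and `classify_gender` (any gender classifier, whether a human observer's responses or a readout on top of VGG-Face or AlexNet) are hypothetical placeholders, and the noise level and trial count are arbitrary.

```python
import numpy as np

def classification_image(base_face, classify_gender, n_trials=5000,
                         noise_sd=0.15, rng=None):
    """Estimate which pixels drive a gender classifier's decisions.

    On each trial, Gaussian white noise is superimposed on a base face and the
    classifier's response is recorded. Averaging the noise fields separately
    for the two response categories and taking their difference yields a
    classification image: positive pixels pushed responses toward "male",
    negative pixels toward "female".
    """
    rng = np.random.default_rng(rng)
    sum_male = np.zeros_like(base_face, dtype=float)
    sum_female = np.zeros_like(base_face, dtype=float)
    n_male = n_female = 0

    for _ in range(n_trials):
        noise = rng.normal(0.0, noise_sd, size=base_face.shape)
        stimulus = np.clip(base_face + noise, 0.0, 1.0)  # keep pixels in [0, 1]
        if classify_gender(stimulus) == 1:  # hypothetical convention: 1 = "male"
            sum_male += noise
            n_male += 1
        else:                               # 0 = "female"
            sum_female += noise
            n_female += 1

    # Difference of the mean noise fields associated with each response.
    return sum_male / max(n_male, 1) - sum_female / max(n_female, 1)
```

The resulting classification images for humans and each network could then be compared directly (e.g., by pixelwise correlation) and band-pass filtered to ask which spatial-frequency bands carry the critical information, which is the kind of comparison the abstract describes.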
Main Authors: | Song, Yiying; Qu, Yukun; Xu, Shan; Liu, Jia |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7870475/ https://www.ncbi.nlm.nih.gov/pubmed/33574746 http://dx.doi.org/10.3389/fncom.2020.601314 |

_version_ | 1783648824808439808 |
---|---|
author | Song, Yiying; Qu, Yukun; Xu, Shan; Liu, Jia |
author_facet | Song, Yiying; Qu, Yukun; Xu, Shan; Liu, Jia |
author_sort | Song, Yiying |
collection | PubMed |
description | Deep convolutional neural networks (DCNN) nowadays can match human performance in challenging complex tasks, but it remains unknown whether DCNNs achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit representations of DCNNs and humans when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, the prior task experience, which the VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification) as humans do, seemed necessary for such representational similarity, because AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded in gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar and implementation-independent representation to achieve the same computation goal. |
format | Online Article Text |
id | pubmed-7870475 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7870475 2021-02-10 Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces Song, Yiying; Qu, Yukun; Xu, Shan; Liu, Jia Front Comput Neurosci Neuroscience Deep convolutional neural networks (DCNN) nowadays can match human performance in challenging complex tasks, but it remains unknown whether DCNNs achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit representations of DCNNs and humans when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, the prior task experience, which the VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification) as humans do, seemed necessary for such representational similarity, because AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded in gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar and implementation-independent representation to achieve the same computation goal. Frontiers Media S.A. 2021-01-26 /pmc/articles/PMC7870475/ /pubmed/33574746 http://dx.doi.org/10.3389/fncom.2020.601314 Text en Copyright © 2021 Song, Qu, Xu and Liu. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Song, Yiying Qu, Yukun Xu, Shan Liu, Jia Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title | Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title_full | Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title_fullStr | Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title_full_unstemmed | Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title_short | Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces |
title_sort | implementation-independent representation for deep convolutional neural networks and humans in processing faces |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7870475/ https://www.ncbi.nlm.nih.gov/pubmed/33574746 http://dx.doi.org/10.3389/fncom.2020.601314 |
work_keys_str_mv | AT songyiying implementationindependentrepresentationfordeepconvolutionalneuralnetworksandhumansinprocessingfaces AT quyukun implementationindependentrepresentationfordeepconvolutionalneuralnetworksandhumansinprocessingfaces AT xushan implementationindependentrepresentationfordeepconvolutionalneuralnetworksandhumansinprocessingfaces AT liujia implementationindependentrepresentationfordeepconvolutionalneuralnetworksandhumansinprocessingfaces |