Closing the gap between single-unit and neural population codes: Insights from deep learning in face recognition
Main Authors: | Parde, Connor J.; Colón, Y. Ivette; Hill, Matthew Q.; Castillo, Carlos D.; Dhar, Prithviraj; O’Toole, Alice J. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | The Association for Research in Vision and Ophthalmology, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8363775/ https://www.ncbi.nlm.nih.gov/pubmed/34379084 http://dx.doi.org/10.1167/jov.21.8.15 |
author | Parde, Connor J.; Colón, Y. Ivette; Hill, Matthew Q.; Castillo, Carlos D.; Dhar, Prithviraj; O’Toole, Alice J. |
collection | PubMed |
description | Single-unit responses and population codes differ in the “read-out” information they provide about high-level visual representations. Diverging local and global read-outs can be difficult to reconcile with in vivo methods. To bridge this gap, we studied the relationship between single-unit and ensemble codes for identity, gender, and viewpoint, using a deep convolutional neural network (DCNN) trained for face recognition. Analogous to the primate visual system, DCNNs develop representations that generalize over image variation, while retaining subject (e.g., gender) and image (e.g., viewpoint) information. At the unit level, we measured the number of single units needed to predict attributes (identity, gender, viewpoint) and the predictive value of individual units for each attribute. Identification was remarkably accurate using random samples of only 3% of the network's output units, and all units had substantial identity-predicting power. Cross-unit responses were minimally correlated, indicating that single units code non-redundant identity cues. Gender and viewpoint classification required large-scale pooling of units—individual units had weak predictive power. At the ensemble level, principal component analysis of face representations showed that identity, gender, and viewpoint separated into high-dimensional subspaces, ordered by explained variance. Unit-based directions in the representational space were compared with the directions associated with the attributes. Identity, gender, and viewpoint contributed to all individual unit responses, undercutting a neural tuning analogy. Instead, single-unit responses carry superimposed, distributed codes for face identity, gender, and viewpoint. This undermines confidence in the interpretation of neural representations from unit response profiles for both DCNNs and, by analogy, high-level vision. |
format | Online Article Text |
id | pubmed-8363775 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | The Association for Research in Vision and Ophthalmology |
record_format | MEDLINE/PubMed |
spelling | pubmed-8363775 2021-08-24 Closing the gap between single-unit and neural population codes: Insights from deep learning in face recognition. Parde, Connor J.; Colón, Y. Ivette; Hill, Matthew Q.; Castillo, Carlos D.; Dhar, Prithviraj; O’Toole, Alice J. J Vis, Article. The Association for Research in Vision and Ophthalmology, 2021-08-11. /pmc/articles/PMC8363775/ /pubmed/34379084 http://dx.doi.org/10.1167/jov.21.8.15 Text en. Copyright 2021, The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/). |
title | Closing the gap between single-unit and neural population codes: Insights from deep learning in face recognition |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8363775/ https://www.ncbi.nlm.nih.gov/pubmed/34379084 http://dx.doi.org/10.1167/jov.21.8.15 |
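To make the unit-level analysis summarized in the abstract concrete, the sketch below samples a random ~3% of a descriptor's output units and scores face identification with nearest-neighbor cosine matching. This is an illustrative reconstruction, not the authors' code: the descriptors are synthetic stand-ins, and the dimensions (512 units, 50 identities, 4 images each) and the nearest-neighbor scoring rule are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_identities = 50      # hypothetical number of subjects
images_per_id = 4      # hypothetical images per subject
n_units = 512          # hypothetical DCNN descriptor length

# Synthetic stand-in descriptors: one mean vector per identity, plus per-image noise.
id_means = rng.normal(size=(n_identities, n_units))
descriptors = (np.repeat(id_means, images_per_id, axis=0)
               + 0.5 * rng.normal(size=(n_identities * images_per_id, n_units)))
labels = np.repeat(np.arange(n_identities), images_per_id)

def identification_accuracy(X, y):
    """Nearest-neighbor identification with cosine similarity (self-match excluded)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    np.fill_diagonal(sims, -np.inf)   # a probe image cannot match itself
    nearest = np.argmax(sims, axis=1)
    return float(np.mean(y[nearest] == y))

# Identification from the full ensemble vs. a random ~3% sample of units.
subset = rng.choice(n_units, size=max(1, int(0.03 * n_units)), replace=False)
print("all units:", identification_accuracy(descriptors, labels))
print("3% sample:", identification_accuracy(descriptors[:, subset], labels))
```

In the paper the descriptors come from the top layer of a face-recognition DCNN rather than synthetic data; the qualitative point illustrated here is that identification can remain accurate even when only a small random subset of output units is read out.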