A deep neural network model of the primate superior colliculus for emotion recognition


Bibliographic Details
Main Authors: Méndez, Carlos Andrés, Celeghin, Alessia, Diano, Matteo, Orsenigo, Davide, Ocak, Brian, Tamietto, Marco
Format: Online Article Text
Language: English
Published: The Royal Society 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9489290/
https://www.ncbi.nlm.nih.gov/pubmed/36126660
http://dx.doi.org/10.1098/rstb.2021.0512
_version_ 1784792845134069760
author Méndez, Carlos Andrés
Celeghin, Alessia
Diano, Matteo
Orsenigo, Davide
Ocak, Brian
Tamietto, Marco
author_facet Méndez, Carlos Andrés
Celeghin, Alessia
Diano, Matteo
Orsenigo, Davide
Ocak, Brian
Tamietto, Marco
author_sort Méndez, Carlos Andrés
collection PubMed
description Although sensory processing is pivotal to nearly every theory of emotion, the evaluation of the visual input as ‘emotional’ (e.g. a smile as signalling happiness) has been traditionally assumed to take place in supramodal ‘limbic’ brain regions. Accordingly, subcortical structures of ancient evolutionary origin that receive direct input from the retina, such as the superior colliculus (SC), are traditionally conceptualized as passive relay centres. However, mounting evidence suggests that the SC is endowed with the necessary infrastructure and computational capabilities for the innate recognition and initial categorization of emotionally salient features from retinal information. Here, we built a neurobiologically inspired convolutional deep neural network (DNN) model that approximates physiological, anatomical and connectional properties of the retino-collicular circuit. This enabled us to characterize and isolate the initial computations and discriminations that the DNN model of the SC can perform on facial expressions, based uniquely on the information it directly receives from the virtual retina. Trained to discriminate facial expressions of basic emotions, our model matches human error patterns and above chance, yet suboptimal, classification accuracy analogous to that reported in patients with V1 damage, who rely on retino-collicular pathways for non-conscious vision of emotional attributes. When presented with gratings of different spatial frequencies and orientations never ‘seen’ before, the SC model exhibits spontaneous tuning to low spatial frequencies and reduced orientation discrimination, as can be expected from the prevalence of the magnocellular (M) over parvocellular (P) projections. 
Likewise, face manipulation that biases processing towards the M or P pathway affects expression recognition in the SC model accordingly, an effect that dovetails with variations of activity in the human SC purposely measured with ultra-high field functional magnetic resonance imaging. Lastly, the DNN generates saliency maps and extracts visual features, demonstrating that certain face parts, like the mouth or the eyes, provide higher discriminative information than other parts as a function of emotional expressions like happiness and sadness. The present findings support the contention that the SC possesses the necessary infrastructure to analyse the visual features that define facial emotional stimuli also without additional processing stages in the visual cortex or in ‘limbic’ areas. This article is part of the theme issue ‘Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience’.
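The spatial-frequency probe described in the abstract — presenting the trained SC model with gratings of different spatial frequencies and orientations it never "saw" during training — can be sketched as follows. This is a minimal illustrative reconstruction, not code from the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def grating(size=64, cycles=4.0, orientation_deg=0.0, phase=0.0):
    """Sinusoidal grating with `cycles` full periods across the image,
    modulated along an axis rotated by `orientation_deg`."""
    ys, xs = np.mgrid[0:size, 0:size] / size  # normalized coordinates in [0, 1)
    theta = np.deg2rad(orientation_deg)
    # Project pixel coordinates onto the grating's axis of modulation.
    u = xs * np.cos(theta) + ys * np.sin(theta)
    return np.sin(2.0 * np.pi * cycles * u + phase)

# Probe stimuli analogous to those described in the abstract:
low_sf = grating(cycles=2.0)                     # low spatial frequency (M-biased)
high_sf = grating(cycles=16.0)                   # high spatial frequency (P-biased)
oblique = grating(cycles=4.0, orientation_deg=45.0)
```

Feeding such stimuli to the trained network and comparing response magnitudes across frequency and orientation is what would reveal the reported tuning to low spatial frequencies and weak orientation discrimination.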
format Online
Article
Text
id pubmed-9489290
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher The Royal Society
record_format MEDLINE/PubMed
spelling pubmed-9489290 2022-10-03 A deep neural network model of the primate superior colliculus for emotion recognition Méndez, Carlos Andrés Celeghin, Alessia Diano, Matteo Orsenigo, Davide Ocak, Brian Tamietto, Marco Philos Trans R Soc Lond B Biol Sci Articles The Royal Society 2022-11-07 2022-09-21 /pmc/articles/PMC9489290/ /pubmed/36126660 http://dx.doi.org/10.1098/rstb.2021.0512 Text en © 2022 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, provided the original author and source are credited.
spellingShingle Articles
Méndez, Carlos Andrés
Celeghin, Alessia
Diano, Matteo
Orsenigo, Davide
Ocak, Brian
Tamietto, Marco
A deep neural network model of the primate superior colliculus for emotion recognition
title A deep neural network model of the primate superior colliculus for emotion recognition
title_full A deep neural network model of the primate superior colliculus for emotion recognition
title_fullStr A deep neural network model of the primate superior colliculus for emotion recognition
title_full_unstemmed A deep neural network model of the primate superior colliculus for emotion recognition
title_short A deep neural network model of the primate superior colliculus for emotion recognition
title_sort deep neural network model of the primate superior colliculus for emotion recognition
topic Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9489290/
https://www.ncbi.nlm.nih.gov/pubmed/36126660
http://dx.doi.org/10.1098/rstb.2021.0512
work_keys_str_mv AT mendezcarlosandres adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT celeghinalessia adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT dianomatteo adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT orsenigodavide adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT ocakbrian adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT tamiettomarco adeepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT mendezcarlosandres deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT celeghinalessia deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT dianomatteo deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT orsenigodavide deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT ocakbrian deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition
AT tamiettomarco deepneuralnetworkmodeloftheprimatesuperiorcolliculusforemotionrecognition