
Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain


Bibliographic Details
Main Authors: Tsuchiya, Naotsugu, Kawasaki, Hiroto, Oya, Hiroyuki, Howard, Matthew A., Adolphs, Ralph
Format: Text
Language: English
Published: Public Library of Science 2008
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2588533/
https://www.ncbi.nlm.nih.gov/pubmed/19065268
http://dx.doi.org/10.1371/journal.pone.0003892
_version_ 1782160947238928384
author Tsuchiya, Naotsugu
Kawasaki, Hiroto
Oya, Hiroyuki
Howard, Matthew A.
Adolphs, Ralph
author_facet Tsuchiya, Naotsugu
Kawasaki, Hiroto
Oya, Hiroyuki
Howard, Matthew A.
Adolphs, Ralph
author_sort Tsuchiya, Naotsugu
collection PubMed
description Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model for independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
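The decoding approach the abstract describes — classifying stimulus categories trial by trial from spectral power on intracranial contacts, with cross-validation — can be illustrated with a minimal sketch. This is not the authors' code: the electrode count, trial count, classifier (nearest class mean), and effect size below are illustrative assumptions, standing in for the paper's band-power features (e.g., 50–150 Hz) and decoding pipeline.

```python
# Hedged sketch: decode two stimulus classes (e.g., faces vs. geometric
# patterns) from simulated high-gamma band-power features using a
# nearest-class-mean classifier with leave-one-out cross-validation.
# All sizes and the class separation are assumptions, not paper values.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_electrodes = 40, 8

# Simulated log band power per trial; class 1 carries a small mean shift.
X = rng.normal(0.0, 1.0, size=(n_trials, n_electrodes))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1] += 1.0

def loo_nearest_mean_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean decoder."""
    correct = 0
    for i in range(len(y)):
        held_out = np.arange(len(y)) == i
        m0 = X[~held_out & (y == 0)].mean(axis=0)  # class-0 mean, test trial excluded
        m1 = X[~held_out & (y == 1)].mean(axis=0)  # class-1 mean, test trial excluded
        predict_1 = np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0)
        correct += int(predict_1 == (y[i] == 1))
    return correct / len(y)

acc = loo_nearest_mean_accuracy(X, y)
```

A searchlight variant, as in the paper's spatial analysis, would repeat this decoding over small neighborhoods of contacts and map accuracy across the electrode grid.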
format Text
id pubmed-2588533
institution National Center for Biotechnology Information
language English
publishDate 2008
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-2588533 2008-12-09 Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain Tsuchiya, Naotsugu Kawasaki, Hiroto Oya, Hiroyuki Howard, Matthew A. Adolphs, Ralph PLoS One Research Article Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere.
Taken together, our results challenge the dominant model for independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. Public Library of Science 2008-12-09 /pmc/articles/PMC2588533/ /pubmed/19065268 http://dx.doi.org/10.1371/journal.pone.0003892 Text en Tsuchiya et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Tsuchiya, Naotsugu
Kawasaki, Hiroto
Oya, Hiroyuki
Howard, Matthew A.
Adolphs, Ralph
Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title_full Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title_fullStr Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title_full_unstemmed Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title_short Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain
title_sort decoding face information in time, frequency and space from direct intracranial recordings of the human brain
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2588533/
https://www.ncbi.nlm.nih.gov/pubmed/19065268
http://dx.doi.org/10.1371/journal.pone.0003892
work_keys_str_mv AT tsuchiyanaotsugu decodingfaceinformationintimefrequencyandspacefromdirectintracranialrecordingsofthehumanbrain
AT kawasakihiroto decodingfaceinformationintimefrequencyandspacefromdirectintracranialrecordingsofthehumanbrain
AT oyahiroyuki decodingfaceinformationintimefrequencyandspacefromdirectintracranialrecordingsofthehumanbrain
AT howardmatthewa decodingfaceinformationintimefrequencyandspacefromdirectintracranialrecordingsofthehumanbrain
AT adolphsralph decodingfaceinformationintimefrequencyandspacefromdirectintracranialrecordingsofthehumanbrain