
Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions

Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal corti...


Bibliographic Details
Main Authors: Knights, Ethan, Mansfield, Courtney, Tonin, Diana, Saada, Janak, Smith, Fraser W., Rossit, Stéphanie
Format: Online Article Text
Language: English
Published: Society for Neuroscience 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8211542/
https://www.ncbi.nlm.nih.gov/pubmed/33972399
http://dx.doi.org/10.1523/JNEUROSCI.0083-21.2021
author Knights, Ethan
Mansfield, Courtney
Tonin, Diana
Saada, Janak
Smith, Fraser W.
Rossit, Stéphanie
author_sort Knights, Ethan
collection PubMed
description Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tool vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the primary tool of the brain for interacting with the world.
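For readers unfamiliar with the decoding method named in the description, the following Python sketch shows the general shape of ROI-based multivoxel pattern analysis (MVPA): a linear classifier trained on per-trial voxel patterns with leave-one-run-out cross-validation. It runs on synthetic data; the array sizes, the injected effect, and the choice of a linear SVM via scikit-learn are illustrative assumptions, not the authors' actual pipeline.

# Minimal MVPA decoding sketch on synthetic data (not the authors' pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_runs, trials_per_run, n_voxels = 8, 10, 200  # hypothetical dimensions
n_trials = n_runs * trials_per_run

# One activity pattern per trial within a (hypothetical) hand-selective ROI.
X = rng.normal(size=(n_trials, n_voxels))
# Condition labels: 0 = typical grasp, 1 = atypical grasp.
y = np.tile([0, 1], n_trials // 2)
# Run labels, used so that training and test trials never share a run.
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Inject a weak multivoxel difference so decoding rises above chance.
X[y == 1, :20] += 0.3

# Linear classifier with leave-one-run-out cross-validation, a standard
# setup for fMRI pattern decoding within a region of interest.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

In a real analysis, X would hold per-trial GLM beta estimates extracted from an independently localized ROI, and above-chance accuracy would be tested with group-level or permutation statistics rather than read off a single mean.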
format Online
Article
Text
id pubmed-8211542
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Society for Neuroscience
record_format MEDLINE/PubMed
spelling pubmed-8211542 2021-06-21 Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions Knights, Ethan; Mansfield, Courtney; Tonin, Diana; Saada, Janak; Smith, Fraser W.; Rossit, Stéphanie J Neurosci Research Articles Society for Neuroscience 2021-06-16 /pmc/articles/PMC8211542/ /pubmed/33972399 http://dx.doi.org/10.1523/JNEUROSCI.0083-21.2021 Text en Copyright © 2021 Knights et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium provided that the original work is properly attributed.
title Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions
topic Research Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8211542/
https://www.ncbi.nlm.nih.gov/pubmed/33972399
http://dx.doi.org/10.1523/JNEUROSCI.0083-21.2021