Decoding the Locus of Covert Visuospatial Attention from EEG Signals
Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is...
Main Authors: Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre
Format: Online Article Text
Language: English
Published: Public Library of Science, 2016
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986977/ https://www.ncbi.nlm.nih.gov/pubmed/27529476 http://dx.doi.org/10.1371/journal.pone.0160304
Field | Value |
---|---|
_version_ | 1782448250948681728 |
author | Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre |
author_facet | Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre |
author_sort | Thiery, Thomas |
collection | PubMed |
description | Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether we may obtain a more precise spatial localization of the focus of attention based on the EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant valid prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001, chance-level 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, which represented a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether spatial attention was allocated or not to a given location with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding and future paths for research are proposed. |
format | Online Article Text |
id | pubmed-4986977 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-49869772016-08-29 Decoding the Locus of Covert Visuospatial Attention from EEG Signals Thiery, Thomas Lajnef, Tarek Jerbi, Karim Arguin, Martin Aubin, Mercedes Jolicoeur, Pierre PLoS One Research Article Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether we may obtain a more precise spatial localization of the focus of attention based on the EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant valid prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001, chance-level 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, which represented a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether spatial attention was allocated or not to a given location with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding and future paths for research are proposed. Public Library of Science 2016-08-16 /pmc/articles/PMC4986977/ /pubmed/27529476 http://dx.doi.org/10.1371/journal.pone.0160304 Text en © 2016 Thiery et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre; Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title | Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title_full | Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title_fullStr | Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title_full_unstemmed | Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title_short | Decoding the Locus of Covert Visuospatial Attention from EEG Signals |
title_sort | decoding the locus of covert visuospatial attention from eeg signals |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986977/ https://www.ncbi.nlm.nih.gov/pubmed/27529476 http://dx.doi.org/10.1371/journal.pone.0160304 |
work_keys_str_mv | AT thierythomas decodingthelocusofcovertvisuospatialattentionfromeegsignals AT lajneftarek decodingthelocusofcovertvisuospatialattentionfromeegsignals AT jerbikarim decodingthelocusofcovertvisuospatialattentionfromeegsignals AT arguinmartin decodingthelocusofcovertvisuospatialattentionfromeegsignals AT aubinmercedes decodingthelocusofcovertvisuospatialattentionfromeegsignals AT jolicoeurpierre decodingthelocusofcovertvisuospatialattentionfromeegsignals |
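
The `description` field above reports that the locus of covert attention (one of four possible positions) was decoded from ERP-based features over occipital and parietal regions, using a multi-class support vector machine evaluated with leave-one-out cross-validation. Below is a minimal sketch of that kind of pipeline in Python with scikit-learn; the synthetic feature matrix, its dimensions, the linear kernel, and the degrees-per-position constant are illustrative assumptions, not details taken from the article.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for ERP-based features: trials x (electrodes * time samples).
n_trials, n_features = 120, 64                 # assumed sizes, not the paper's
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 4, size=n_trials)          # attended location: one of four positions

# Multi-class linear SVM with feature standardization.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

# Leave-one-out cross-validation: each trial is held out once for testing.
loo = LeaveOneOut()
scores = cross_val_score(clf, X, y, cv=loo)
print(f"Decoding accuracy: {scores.mean():.1%} (chance level = 25%)")

# Mean distance between predicted and true focus, in letter positions,
# mirroring the error metric reported in the abstract.
y_pred = cross_val_predict(clf, X, y, cv=loo)
mean_dist = np.abs(y_pred - y).mean()
DEG_PER_POSITION = 0.89  # assumed letter spacing in degrees of visual angle
print(f"Mean error: {mean_dist:.2f} positions (~{mean_dist * DEG_PER_POSITION:.2f} deg)")
```

With real ERP features, `scores.mean()` plays the role of the decoding accuracy the abstract reports (57% against a 25% chance level); statistical significance could then be assessed, for example, with a permutation test over shuffled labels.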