
Evaluation of the reliability and validity of computerized tests of attention

Bibliographic Details
Main Authors: Langner, Robert, Scharnowski, Frank, Ionta, Silvio, G. Salmon, Carlos E., Piper, Brian J., Pamplona, Gustavo S. P.
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9882756/
https://www.ncbi.nlm.nih.gov/pubmed/36706136
http://dx.doi.org/10.1371/journal.pone.0281196
_version_ 1784879363008757760
author Langner, Robert
Scharnowski, Frank
Ionta, Silvio
G. Salmon, Carlos E.
Piper, Brian J.
Pamplona, Gustavo S. P.
author_facet Langner, Robert
Scharnowski, Frank
Ionta, Silvio
G. Salmon, Carlos E.
Piper, Brian J.
Pamplona, Gustavo S. P.
author_sort Langner, Robert
collection PubMed
description Different aspects of attention can be assessed through psychological tests to identify stable individual or group differences as well as alterations after interventions. Aiming for a wide applicability of attentional assessments, Psychology Experiment Building Language (PEBL) is an open-source software system for designing and running computerized tasks that tax various attentional functions. Here, we evaluated the reliability and validity of computerized attention tasks as provided with the PEBL package: Continuous Performance Task (CPT), Switcher task, Psychomotor Vigilance Task (PVT), Mental Rotation task, and Attentional Network Test. For all tasks, we evaluated test-retest reliability using the intraclass correlation coefficient (ICC), as well as internal consistency through within-test correlations and split-half ICC. Across tasks, response time scores showed adequate reliability, whereas scores of performance accuracy, variability, and deterioration over time did not. Stability across application sites was observed for the CPT and Switcher task, but practice effects were observed for all tasks except the PVT. We substantiate convergent and discriminant validity for several task scores using between-task correlations and provide further evidence for construct validity via associations of task scores with attentional and motivational assessments. Taken together, our results provide necessary information to help design and interpret studies involving attention assessments.
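The reliability analyses described in the abstract hinge on the intraclass correlation coefficient. As a minimal illustration only (this is not the authors' analysis code; the data layout and the specific choice of ICC(2,1) are assumptions here), test-retest reliability of a score such as mean response time can be computed from a subjects-by-sessions matrix. The same function applies to a split-half ICC if the two columns instead hold scores from each half of a single test session.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    scores: (n_subjects, k_sessions) matrix, e.g. each participant's mean
    response time at test and retest.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means

    # Two-way ANOVA decomposition of the total sum of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

if __name__ == "__main__":
    # Hypothetical example: mean RTs (ms) for 20 participants, test vs. retest
    rng = np.random.default_rng(0)
    true_rt = rng.normal(450, 60, size=(20, 1))
    rt = true_rt + rng.normal(0, 25, size=(20, 2))
    print(f"ICC(2,1) = {icc_2_1(rt):.2f}")
```

ICC(2,1) is one common choice for test-retest designs because it treats sessions as random and penalizes systematic shifts between them (e.g., practice effects); whether the study used this variant or another (e.g., consistency rather than absolute agreement) is not stated in this record.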
format Online
Article
Text
id pubmed-9882756
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-9882756 2023-01-28 Evaluation of the reliability and validity of computerized tests of attention Langner, Robert Scharnowski, Frank Ionta, Silvio G. Salmon, Carlos E. Piper, Brian J. Pamplona, Gustavo S. P. PLoS One Research Article Different aspects of attention can be assessed through psychological tests to identify stable individual or group differences as well as alterations after interventions. Aiming for a wide applicability of attentional assessments, Psychology Experiment Building Language (PEBL) is an open-source software system for designing and running computerized tasks that tax various attentional functions. Here, we evaluated the reliability and validity of computerized attention tasks as provided with the PEBL package: Continuous Performance Task (CPT), Switcher task, Psychomotor Vigilance Task (PVT), Mental Rotation task, and Attentional Network Test. For all tasks, we evaluated test-retest reliability using the intraclass correlation coefficient (ICC), as well as internal consistency through within-test correlations and split-half ICC. Across tasks, response time scores showed adequate reliability, whereas scores of performance accuracy, variability, and deterioration over time did not. Stability across application sites was observed for the CPT and Switcher task, but practice effects were observed for all tasks except the PVT. We substantiate convergent and discriminant validity for several task scores using between-task correlations and provide further evidence for construct validity via associations of task scores with attentional and motivational assessments. Taken together, our results provide necessary information to help design and interpret studies involving attention assessments. Public Library of Science 2023-01-27 /pmc/articles/PMC9882756/ /pubmed/36706136 http://dx.doi.org/10.1371/journal.pone.0281196 Text en https://creativecommons.org/publicdomain/zero/1.0/ This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 (https://creativecommons.org/publicdomain/zero/1.0/) public domain dedication.
spellingShingle Research Article
Langner, Robert
Scharnowski, Frank
Ionta, Silvio
G. Salmon, Carlos E.
Piper, Brian J.
Pamplona, Gustavo S. P.
Evaluation of the reliability and validity of computerized tests of attention
title Evaluation of the reliability and validity of computerized tests of attention
title_full Evaluation of the reliability and validity of computerized tests of attention
title_fullStr Evaluation of the reliability and validity of computerized tests of attention
title_full_unstemmed Evaluation of the reliability and validity of computerized tests of attention
title_short Evaluation of the reliability and validity of computerized tests of attention
title_sort evaluation of the reliability and validity of computerized tests of attention
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9882756/
https://www.ncbi.nlm.nih.gov/pubmed/36706136
http://dx.doi.org/10.1371/journal.pone.0281196
work_keys_str_mv AT langnerrobert evaluationofthereliabilityandvalidityofcomputerizedtestsofattention
AT scharnowskifrank evaluationofthereliabilityandvalidityofcomputerizedtestsofattention
AT iontasilvio evaluationofthereliabilityandvalidityofcomputerizedtestsofattention
AT gsalmoncarlose evaluationofthereliabilityandvalidityofcomputerizedtestsofattention
AT piperbrianj evaluationofthereliabilityandvalidityofcomputerizedtestsofattention
AT pamplonagustavosp evaluationofthereliabilityandvalidityofcomputerizedtestsofattention