
The timing mega-study: comparing a range of experiment generators, both lab-based and online

Many researchers in the behavioral sciences depend on research software that presents stimuli and records response times with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants.


Bibliographic Details
Main Authors: Bridges, David, Pitiot, Alain, MacAskill, Michael R., Peirce, Jonathan W.
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512138/
https://www.ncbi.nlm.nih.gov/pubmed/33005482
http://dx.doi.org/10.7717/peerj.9414
author Bridges, David
Pitiot, Alain
MacAskill, Michael R.
Peirce, Jonathan W.
collection PubMed
description Many researchers in the behavioral sciences depend on research software that presents stimuli and records response times with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, most notably in audio stimuli, and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, achieved very close to millisecond precision on several browser/operating system combinations. For response times (measured using a high-performance button box), most of the packages achieved precision under 10 ms in all browsers, with PsychoPy achieving precision under 3.5 ms in all. There was considerable variability between OS/browser combinations, especially in audio-visual synchrony, which is the least precise aspect of browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in software packages dedicated to the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.
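The description distinguishes timing accuracy (the systematic offset between intended and actual stimulus onset) from precision (trial-to-trial variability of that offset). As a rough illustration of how such hardware-validation measurements are typically summarized, here is a minimal Python sketch; the function name and the onset times are made up for illustration and are not data from the study:

```python
# Sketch: summarize stimulus-timing accuracy and precision from
# software-logged onset times versus onsets measured by external
# hardware (e.g., a Black Box Toolkit photodiode). Illustrative only.
from statistics import mean, stdev

def timing_summary(logged_ms, measured_ms):
    """Return (accuracy, precision) in milliseconds.

    accuracy  = mean of (measured - logged): the systematic lag.
    precision = standard deviation of the lag: trial-to-trial jitter.
    """
    lags = [m - l for l, m in zip(logged_ms, measured_ms)]
    return mean(lags), stdev(lags)

# Made-up numbers: a constant ~5 ms lag with small jitter.
logged = [0.0, 1000.0, 2000.0, 3000.0]
measured = [5.1, 1004.9, 2005.2, 3004.8]
acc, prec = timing_summary(logged, measured)
```

In these terms, a package with "mean precision under 1 millisecond" keeps the standard deviation of the lag below 1 ms, even if a constant lag (poor accuracy) remains; a constant lag matters less in practice because it can be measured once and corrected for.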
format Online
Article
Text
id pubmed-7512138
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-7512138 2020-09-30 The timing mega-study: comparing a range of experiment generators, both lab-based and online Bridges, David Pitiot, Alain MacAskill, Michael R. Peirce, Jonathan W. PeerJ Neuroscience PeerJ Inc. 2020-07-20 /pmc/articles/PMC7512138/ /pubmed/33005482 http://dx.doi.org/10.7717/peerj.9414 Text en © 2020 Bridges et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited.
title The timing mega-study: comparing a range of experiment generators, both lab-based and online
title_short The timing mega-study: comparing a range of experiment generators, both lab-based and online
title_sort timing mega-study: comparing a range of experiment generators, both lab-based and online
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512138/
https://www.ncbi.nlm.nih.gov/pubmed/33005482
http://dx.doi.org/10.7717/peerj.9414