
Data quality of platforms and panels for online behavioral research

We examine key aspects of data quality for online behavioral research between selected platforms (Amazon Mechanical Turk, CloudResearch, and Prolific) and panels (Qualtrics and Dynata). To identify the key aspects of data quality, we first engaged with the behavioral research community to discover which aspects are most critical to researchers and found that these include attention, comprehension, honesty, and reliability. We then explored differences in these data quality aspects in two studies (N ~ 4000), with or without data quality filters (approval ratings). We found considerable differences between the sites, especially in comprehension, attention, and dishonesty. In Study 1 (without filters), we found that only Prolific provided high data quality on all measures. In Study 2 (with filters), we found high data quality among CloudResearch and Prolific. MTurk showed alarmingly low data quality even with data quality filters. We also found that while reputation (approval rating) did not predict data quality, frequency and purpose of usage did, especially on MTurk: the lowest data quality came from MTurk participants who report using the site as their main source of income but spend few hours on it per week. We provide a framework for future investigation into the ever-changing nature of data quality in online research, and how the evolving set of platforms and panels performs on these key aspects.

Bibliographic Details

Main Authors: Peer, Eyal; Rothschild, David; Gordon, Andrew; Evernden, Zak; Damer, Ekaterina
Format: Online Article Text
Language: English
Published: Springer US, 2021 (Behav Res Methods; published online 2021-09-29)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8480459/
https://www.ncbi.nlm.nih.gov/pubmed/34590289
http://dx.doi.org/10.3758/s13428-021-01694-3
Access Note: © The Psychonomic Society, Inc. 2021. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.