Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches

Bibliographic Details
Main Authors: Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing
Format: Online Article Text
Language: English
Published: JMIR Publications Inc. 2015
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704927/
https://www.ncbi.nlm.nih.gov/pubmed/26678085
http://dx.doi.org/10.2196/jmir.4582
Description
Summary:
BACKGROUND: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has become the leading platform for crowdsourcing recruitment.
OBJECTIVE: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs.
METHODS: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. The set of 486 illustrations was first tested through an in-person survey in a hospital setting (n=150 participants) and then with online MTurk participants (n=150). We analyzed the survey results to determine their comparability.
RESULTS: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In a multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct).
CONCLUSIONS: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but not a replacement for them.
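
For readers unfamiliar with how such an adjusted comparison is obtained, the sketch below illustrates the general approach under stated assumptions; it is not the authors' code, and the data file and column names (rating, group, age, education, sex) are hypothetical. It fits an ordinary least squares model of the 4-point recognition rating on a group indicator plus demographic covariates using Python's statsmodels; the coefficient on the group indicator corresponds to the kind of adjusted mean difference reported in the abstract.

    # Minimal sketch of an adjusted group comparison via linear regression.
    # Assumes a hypothetical CSV with columns: rating, group, age, education, sex.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("pictograph_ratings.csv")  # hypothetical data file

    # group is coded 0 = in-person hospital sample, 1 = online MTurk sample.
    # C(...) treats education and sex as categorical covariates.
    model = smf.ols("rating ~ group + age + C(education) + C(sex)", data=df).fit()

    # The coefficient on `group` is the adjusted mean difference between the
    # MTurk and hospital samples; its 95% CI comes from the fitted model.
    print(model.params["group"])
    print(model.conf_int().loc["group"])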