Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches
Main Authors: | Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications Inc., 2015 |
Subjects: | Original Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704927/ https://www.ncbi.nlm.nih.gov/pubmed/26678085 http://dx.doi.org/10.2196/jmir.4582 |
_version_ | 1782408936002945024 |
---|---|
author | Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing |
author_facet | Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing |
author_sort | Kuang, Jinqiu |
collection | PubMed |
description | BACKGROUND: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has developed into the leading platform for crowdsourcing recruitment. OBJECTIVE: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs. METHODS: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then through an online survey of MTurk participants (n=150). We analyzed these survey results to determine their comparability. RESULTS: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). CONCLUSIONS: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but not a replacement for them. |
format | Online Article Text |
id | pubmed-4704927 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | JMIR Publications Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-4704927 2016-01-12 Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing J Med Internet Res Original Paper BACKGROUND: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has developed into the leading platform for crowdsourcing recruitment. OBJECTIVE: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs. METHODS: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then through an online survey of MTurk participants (n=150). We analyzed these survey results to determine their comparability. RESULTS: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). CONCLUSIONS: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but not a replacement for them. JMIR Publications Inc. 2015-12-17 /pmc/articles/PMC4704927/ /pubmed/26678085 http://dx.doi.org/10.2196/jmir.4582 Text en ©Jinqiu Kuang, Lauren Argo, Greg Stoddard, Bruce E Bray, Qing Zeng-Treitler. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.12.2015. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included. |
spellingShingle | Original Paper Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title | Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title_full | Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title_fullStr | Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title_full_unstemmed | Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title_short | Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches |
title_sort | assessing pictograph recognition: a comparison of crowdsourcing and traditional survey approaches |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704927/ https://www.ncbi.nlm.nih.gov/pubmed/26678085 http://dx.doi.org/10.2196/jmir.4582 |
work_keys_str_mv | AT kuangjinqiu assessingpictographrecognitionacomparisonofcrowdsourcingandtraditionalsurveyapproaches AT argolauren assessingpictographrecognitionacomparisonofcrowdsourcingandtraditionalsurveyapproaches AT stoddardgreg assessingpictographrecognitionacomparisonofcrowdsourcingandtraditionalsurveyapproaches AT braybrucee assessingpictographrecognitionacomparisonofcrowdsourcingandtraditionalsurveyapproaches AT zengtreitlerqing assessingpictographrecognitionacomparisonofcrowdsourcingandtraditionalsurveyapproaches |
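The RESULTS reported in the abstract above rest on a single statistical step: a multivariable linear regression of the recognition rating on a sample indicator plus demographic covariates, in which the coefficient on the indicator is the adjusted mean difference (the reported 0.18, 95% CI 0.08-0.28). The following is a minimal sketch of how such an adjusted comparison is typically estimated, using simulated data and hypothetical covariate names (age, female, college) as stand-ins for the study's actual demographic variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: the covariate names (age, female, college) and all
# values below are assumptions for illustration, not the study's dataset.
rng = np.random.default_rng(42)
n = 300  # 150 hospital + 150 MTurk, mirroring the reported sample sizes
df = pd.DataFrame({
    "group": np.repeat(["hospital", "mturk"], n // 2),
    "age": rng.integers(18, 80, size=n),
    "female": rng.integers(0, 2, size=n),
    "college": rng.integers(0, 2, size=n),
})
# Ratings on the 1-4 scale with a built-in 0.18-point group effect (the
# reported adjusted mean difference); treated as continuous for OLS.
effect = np.where(df["group"] == "mturk", 0.18, 0.0)
df["rating"] = np.clip(2.95 + effect + rng.normal(0, 0.5, size=n), 1, 4)

# Regress rating on a group indicator plus demographic covariates; the
# coefficient on the MTurk level is the adjusted mean difference.
fit = smf.ols("rating ~ C(group) + age + female + college", data=df).fit()
print(fit.params["C(group)[T.mturk]"])          # adjusted mean difference
print(fit.conf_int().loc["C(group)[T.mturk]"])  # its 95% CI
```

With treatment coding, the `C(group)[T.mturk]` coefficient is the MTurk-vs-hospital contrast at fixed covariate values, which is why it can differ from the raw difference in group means when the two samples' demographics differ, as they did in this study.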