
Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk


Bibliographic Details
Main Authors: Yu, Bei, Willis, Matt, Sun, Peiyuan, Wang, Jun
Format: Online Article Text
Language: English
Published: JMIR Publications Inc. 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3785992/
https://www.ncbi.nlm.nih.gov/pubmed/23732572
http://dx.doi.org/10.2196/jmir.2513
_version_ 1782477709979418624
author Yu, Bei
Willis, Matt
Sun, Peiyuan
Wang, Jun
author_facet Yu, Bei
Willis, Matt
Sun, Peiyuan
Wang, Jun
author_sort Yu, Bei
collection PubMed
description BACKGROUND: Consumer and patient participation has proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive alternative that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, commonly referred to as "turkers". OBJECTIVE: To answer two research questions: (1) Is the turkers' collective effort effective for identifying design problems in medical pictograms? (2) Do the turkers' demographic characteristics affect their performance in medical pictogram comprehension? METHODS: We designed a Web-based survey (open-ended tests) asking 100 US turkers to type their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured as the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis of the turkers' interpretations to identify misunderstandings and assess whether those misunderstandings were common, and a statistical analysis to examine the relationship between the turkers' demographic characteristics and their pictogram comprehension performance. RESULTS: The survey was completed within 3 days of posting the task to MTurk, and the collected data are publicly available for download in the multimedia appendix. Comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. For 10 pictograms, the comprehensibility scores correlated strongly with the scores for the same pictograms reported in another study that used oral response–based open-ended testing with local participants. The turkers' misinterpretations shared common errors that exposed design problems in the pictograms. Participant performance was positively correlated with educational level. CONCLUSIONS: The results confirmed that crowdsourcing can serve as an effective and inexpensive approach to participatory evaluation of medical pictograms: through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Because low-literate people are underrepresented in the turker population, further investigation is needed to determine to what extent turkers' misunderstandings overlap with those elicited from low-literate people.
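The scoring rule described in METHODS (comprehensibility = percentage of correct guesses, with each partially correct guess counted as 0.5 correct) can be sketched as a short computation. This is an illustrative sketch only: the function name, category labels, and sample data below are assumptions for demonstration, not taken from the paper's dataset.

```python
# Illustrative sketch of the comprehensibility score from METHODS:
# percentage of correct guesses, with each partially correct guess
# counted as 0.5 correct. Category labels here are assumptions.

def comprehensibility(codes):
    """Return the comprehensibility score (0-100) for one pictogram.

    codes: list of judge-assigned categories, one per respondent.
    """
    weights = {"correct": 1.0, "partially correct": 0.5,
               "wrong": 0.0, "completely wrong": 0.0}
    return 100 * sum(weights[c] for c in codes) / len(codes)

# Hypothetical example: 60 correct, 25 partially correct, 15 wrong
codes = ["correct"] * 60 + ["partially correct"] * 25 + ["wrong"] * 15
print(comprehensibility(codes))  # 72.5
```

Under this rule a pictogram's score rises by a full point per correct guess and half a point per partially correct guess, per 100 respondents, which matches the reported 45%–98% score range.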
format Online
Article
Text
id pubmed-3785992
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher JMIR Publications Inc.
record_format MEDLINE/PubMed
spelling pubmed-3785992 2013-10-17 Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk Yu, Bei Willis, Matt Sun, Peiyuan Wang, Jun J Med Internet Res Original Paper JMIR Publications Inc. 2013-06-03 /pmc/articles/PMC3785992/ /pubmed/23732572 http://dx.doi.org/10.2196/jmir.2513 Text en ©Bei Yu, Matt Willis, Peiyuan Sun, Jun Wang. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.06.2013. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Yu, Bei
Willis, Matt
Sun, Peiyuan
Wang, Jun
Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title_full Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title_fullStr Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title_full_unstemmed Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title_short Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
title_sort crowdsourcing participatory evaluation of medical pictograms using amazon mechanical turk
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3785992/
https://www.ncbi.nlm.nih.gov/pubmed/23732572
http://dx.doi.org/10.2196/jmir.2513
work_keys_str_mv AT yubei crowdsourcingparticipatoryevaluationofmedicalpictogramsusingamazonmechanicalturk
AT willismatt crowdsourcingparticipatoryevaluationofmedicalpictogramsusingamazonmechanicalturk
AT sunpeiyuan crowdsourcingparticipatoryevaluationofmedicalpictogramsusingamazonmechanicalturk
AT wangjun crowdsourcingparticipatoryevaluationofmedicalpictogramsusingamazonmechanicalturk