
An experimental characterization of workers’ behavior and accuracy in crowdsourced tasks

Crowdsourcing systems have evolved into a powerful tool of choice for repetitive or lengthy human-based tasks. Prominent among them is Amazon Mechanical Turk (MTurk), in which Human Intelligence Tasks (HITs) are posted by requesters and then selected and executed by the platform's subscribed (human) workers. These HITs often serve research purposes. In this context, a key question is how reliable the results obtained through these platforms are, given the limited control a requester has over the workers' actions. Various control techniques have been proposed, but they are not free from shortcomings, and their use must be accompanied by a deeper understanding of workers' behavior. In this work, we attempt to interpret workers' behavior and reliability in the absence of control techniques. To do so, we perform a series of experiments with 600 distinct MTurk workers, specifically designed to elicit a worker's level of dedication to a task according to the task's nature and difficulty. We show that the time a worker needs to carry out a task correlates with its difficulty, and also with the quality of the outcome. We find that there are different types of workers: while some are willing to invest a significant amount of time to arrive at the correct answer, a significant fraction reply with a wrong answer. For the latter, the difficulty of the task and the very short time they took to reply suggest that they intentionally did not even attempt to solve it.


Bibliographic Details
Main Authors: Christoforou, Evgenia, Fernández Anta, Antonio, Sánchez, Angel
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8208528/
https://www.ncbi.nlm.nih.gov/pubmed/34133447
http://dx.doi.org/10.1371/journal.pone.0252604
id pubmed-8208528
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal PLoS One (Research Article)
published Public Library of Science, 2021-06-16
rights © 2021 Christoforou et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.