An experimental characterization of workers’ behavior and accuracy in crowdsourced tasks
Crowdsourcing systems are evolving into a tool of choice for repetitive or lengthy human-based tasks. Prominent among them is Amazon Mechanical Turk, in which Human Intelligence Tasks are posted by requesters and then selected and executed by subscribed (human) workers in...
Main Authors: Christoforou, Evgenia; Fernández Anta, Antonio; Sánchez, Angel
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8208528/ | https://www.ncbi.nlm.nih.gov/pubmed/34133447 | http://dx.doi.org/10.1371/journal.pone.0252604
Similar Items
- Algorithmic Mechanisms for Reliable Crowdsourcing Computation under Collusion
  by: Fernández Anta, Antonio, et al.
  Published: (2015)
- A Mechanism for Fair Distribution of Resources without Payments
  by: Christoforou, Evgenia, et al.
  Published: (2016)
- Task Allocation Model Based on Worker Friend Relationship for Mobile Crowdsourcing
  by: Zhao, Bingxu, et al.
  Published: (2019)
- Crowdsourcing Dialect Characterization through Twitter
  by: Gonçalves, Bruno, et al.
  Published: (2014)
- Lessons Learned from Crowdsourcing Complex Engineering Tasks
  by: Staffelbach, Matthew, et al.
  Published: (2015)