
A Crowdsourcing Framework for Medical Data Sets


Bibliographic Details
Main Authors: Ye, Cheng; Coco, Joseph; Epishova, Anna; Hajaj, Chen; Bogardus, Henry; Novak, Laurie; Denny, Joshua; Vorobeychik, Yevgeniy; Lasko, Thomas; Malin, Bradley; Fabbri, Daniel
Format: Online Article Text
Language: English
Published: American Medical Informatics Association, 2018
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5961774/
https://www.ncbi.nlm.nih.gov/pubmed/29888085
Description
Summary: Crowdsourcing services like Amazon Mechanical Turk allow researchers to pose questions to crowds of workers and quickly receive high-quality labeled responses. However, crowds drawn from the general public are not suitable for labeling sensitive and complex data sets, such as medical records, due to various concerns. Major challenges in building and deploying a crowdsourcing system for medical data include, but are not limited to: managing access rights to sensitive data and ensuring data privacy controls are enforced; identifying workers with the necessary expertise to analyze complex information; and efficiently retrieving relevant information in massive data sets. In this paper, we introduce a crowdsourcing framework to support the annotation of medical data sets. We further demonstrate a workflow for crowdsourcing clinical chart reviews, including (1) the design and decomposition of research questions; (2) the architecture for storing and displaying sensitive data; and (3) the development of tools to support crowd workers in quickly analyzing information from complex data sets.