
Precision Telemedicine through Crowdsourced Machine Learning: Testing Variability of Crowd Workers for Video-Based Autism Feature Recognition


Bibliographic Details
Main Authors: Washington, Peter, Leblanc, Emilie, Dunlap, Kaitlyn, Penev, Yordan, Kline, Aaron, Paskov, Kelley, Sun, Min Woo, Chrisman, Brianna, Stockham, Nathaniel, Varma, Maya, Voss, Catalin, Haber, Nick, Wall, Dennis P.
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7564950/
https://www.ncbi.nlm.nih.gov/pubmed/32823538
http://dx.doi.org/10.3390/jpm10030086
author Washington, Peter
Leblanc, Emilie
Dunlap, Kaitlyn
Penev, Yordan
Kline, Aaron
Paskov, Kelley
Sun, Min Woo
Chrisman, Brianna
Stockham, Nathaniel
Varma, Maya
Voss, Catalin
Haber, Nick
Wall, Dennis P.
collection PubMed
description Mobilized telemedicine is becoming a key, and even necessary, facet of both precision health and precision medicine. In this study, we evaluate the capability and potential of a crowd of virtual workers—defined as vetted members of popular crowdsourcing platforms—to aid in the task of diagnosing autism. We evaluate workers when crowdsourcing the task of providing categorical ordinal behavioral ratings to unstructured public YouTube videos of children with autism and neurotypical controls. To evaluate emerging patterns that are consistent across independent crowds, we target workers from distinct geographic loci on two crowdsourcing platforms: an international group of workers on Amazon Mechanical Turk (MTurk) (N = 15) and Microworkers from Bangladesh (N = 56), Kenya (N = 23), and the Philippines (N = 25). We feed worker responses as input to a validated diagnostic machine learning classifier trained on clinician-filled electronic health records. We find that regardless of crowd platform or targeted country, workers vary in the average confidence of the correct diagnosis predicted by the classifier. The best worker responses produce a mean probability of the correct class above 80% and over one standard deviation above 50%, accuracy and variability on par with experts according to prior studies. There is a weak correlation between mean time spent on task and mean performance (r = 0.358, p = 0.005). These results demonstrate that while the crowd can produce accurate diagnoses, there are intrinsic differences in crowdworker ability to rate behavioral features. We propose a novel strategy for recruitment of crowdsourced workers to ensure high quality diagnostic evaluations of autism, and potentially many other pediatric behavioral health conditions. Our approach represents a viable step in the direction of crowd-based approaches for more scalable and affordable precision medicine.
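The abstract describes two concrete analysis steps: flagging workers whose responses yield a mean classifier probability of the correct diagnosis above 80%, and correlating mean time on task with mean performance (Pearson r = 0.358). A minimal sketch of both steps, using entirely hypothetical worker summaries (the names, values, and threshold variable below are illustrative assumptions, not data or code from the study):

```python
from statistics import mean, stdev

# Hypothetical per-worker summaries: mean classifier probability assigned to
# the correct diagnosis across that worker's rated videos, and mean seconds
# spent per rating task. Values are illustrative only.
workers = [
    {"id": "w01", "mean_p_correct": 0.84, "mean_time_s": 95.0},
    {"id": "w02", "mean_p_correct": 0.61, "mean_time_s": 40.0},
    {"id": "w03", "mean_p_correct": 0.78, "mean_time_s": 80.0},
    {"id": "w04", "mean_p_correct": 0.55, "mean_time_s": 35.0},
]

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Recruitment-style filter mirroring the abstract: keep workers whose mean
# probability of the correct class exceeds 0.80.
qualified = [w["id"] for w in workers if w["mean_p_correct"] > 0.80]

# Time-on-task vs. performance correlation (the study reports r = 0.358 on
# its real data; this toy sample will differ).
r = pearson_r([w["mean_time_s"] for w in workers],
              [w["mean_p_correct"] for w in workers])
print(qualified)
print(round(r, 3))
```

This only illustrates the shape of the analysis; the study's actual pipeline feeds categorical ordinal ratings into a validated diagnostic classifier trained on clinician-filled electronic health records.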
format Online Article Text
id pubmed-7564950
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7564950 2020-10-26 MDPI 2020-08-13 /pmc/articles/PMC7564950/ /pubmed/32823538 http://dx.doi.org/10.3390/jpm10030086 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
topic Article