
Crowdsourcing as a Novel Technique for Retinal Fundus Photography Classification: Analysis of Images in the EPIC Norfolk Cohort on Behalf of the UKBiobank Eye and Vision Consortium


Bibliographic Details
Main Authors: Mitry, Danny; Peto, Tunde; Hayat, Shabina; Morgan, James E.; Khaw, Kay-Tee; Foster, Paul J.
Format: Online Article Text
Language: English
Published: Public Library of Science 2013
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3749186/
https://www.ncbi.nlm.nih.gov/pubmed/23990935
http://dx.doi.org/10.1371/journal.pone.0071154
author Mitry, Danny
Peto, Tunde
Hayat, Shabina
Morgan, James E.
Khaw, Kay-Tee
Foster, Paul J.
collection PubMed
description AIM: Crowdsourcing is the process of outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing for the classification of retinal fundus photographs.
METHODS: One hundred retinal fundus photographs with pre-determined disease criteria were selected by experts from a large cohort study. After reading brief instructions and an example classification, knowledge workers (KWs) on a crowdsourcing platform were asked to classify each image as normal or abnormal, with grades of severity. Each image was classified 20 times by different KWs. Four study designs were examined to assess the effect of varying incentives and KW experience on classification accuracy. All study designs were conducted twice to examine repeatability. Performance was assessed by comparing sensitivity, specificity and the area under the receiver operating characteristic curve (AUC).
RESULTS: Without restriction on eligible participants, two thousand classifications of 100 images were received in under 24 hours at minimal cost. In trial 1, all study designs had an AUC (95% CI) of 0.701 (0.680–0.721) or greater for classification of normal versus abnormal. In trial 1, the highest AUC (95% CI) for normal/abnormal classification was 0.757 (0.738–0.776), achieved by KWs with moderate experience. Comparable results were observed in trial 2. In trial 1, 64–86% of abnormal images were correctly classified by over half of all KWs; in trial 2, this range was 74–97%. Sensitivity was ≥96% for normal versus severely abnormal detections across all trials. Sensitivity for normal versus mildly abnormal varied between 61% and 79% across trials.
CONCLUSIONS: With minimal training, crowdsourcing represents an accurate, rapid and cost-effective method of retinal image analysis which demonstrates good repeatability. Larger studies with more comprehensive participant training are needed to explore the utility of this compelling technique in large-scale medical image analysis.
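The description above implies a particular analysis pipeline: each image receives 20 independent grades, an image counts as detected when over half of KWs call it abnormal, and performance is summarised by sensitivity, specificity and AUC. The sketch below illustrates that style of analysis in Python. It is not the authors' actual code; the image IDs, grades and ground-truth labels are invented for illustration, and AUC is computed here via the Mann-Whitney U statistic rather than any specific package the study may have used.

```python
# Minimal sketch of aggregating crowdsourced grades and scoring them.
# Hypothetical data: grades on a 0-3 scale (0 = normal, 1 = mild,
# 2 = moderate, 3 = severe), 20 knowledge-worker (KW) grades per image.
kw_grades = {
    "img_001": [0] * 17 + [1] * 3,            # mostly graded normal
    "img_002": [2] * 12 + [1] * 5 + [0] * 3,  # mostly graded abnormal
    "img_003": [3] * 19 + [2] * 1,            # graded severely abnormal
}
ground_truth = {"img_001": 0, "img_002": 1, "img_003": 1}  # 0 normal, 1 abnormal

def majority_abnormal(grades):
    """An image counts as abnormal if over half of KWs grade it > 0."""
    return sum(g > 0 for g in grades) > len(grades) / 2

def mean_grade(grades):
    """Continuous score used for the ROC analysis: the mean KW grade."""
    return sum(grades) / len(grades)

# Sensitivity / specificity of the majority-vote rule.
tp = fp = tn = fn = 0
for img, truth in ground_truth.items():
    pred = majority_abnormal(kw_grades[img])
    if truth and pred:
        tp += 1
    elif truth and not pred:
        fn += 1
    elif not truth and pred:
        fp += 1
    else:
        tn += 1
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# AUC via the Mann-Whitney U statistic: the probability that a randomly
# chosen abnormal image receives a higher mean grade than a normal one.
abnormal = [mean_grade(kw_grades[i]) for i, t in ground_truth.items() if t]
normal = [mean_grade(kw_grades[i]) for i, t in ground_truth.items() if not t]
pairs = [(a, n) for a in abnormal for n in normal]
auc = sum((a > n) + 0.5 * (a == n) for a, n in pairs) / len(pairs)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```

On this toy data the majority vote recovers all three labels, so sensitivity, specificity and AUC are all 1.0; with 100 images and imperfect graders, the same machinery would produce values in the ranges the abstract reports.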
format Online
Article
Text
id pubmed-3749186
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-3749186 2013-08-29. PLoS One, Research Article. Public Library of Science, published 2013-08-21. Text en. © 2013 Mitry et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Crowdsourcing as a Novel Technique for Retinal Fundus Photography Classification: Analysis of Images in the EPIC Norfolk Cohort on Behalf of the UKBiobank Eye and Vision Consortium
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3749186/
https://www.ncbi.nlm.nih.gov/pubmed/23990935
http://dx.doi.org/10.1371/journal.pone.0071154