Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts

Bibliographic Details
Main Authors: See, Linda, Comber, Alexis, Salk, Carl, Fritz, Steffen, van der Velde, Marijn, Perger, Christoph, Schill, Christian, McCallum, Ian, Kraxner, Florian, Obersteiner, Michael
Format: Online Article Text
Language: English
Published: Public Library of Science 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3729953/
https://www.ncbi.nlm.nih.gov/pubmed/23936126
http://dx.doi.org/10.1371/journal.pone.0069958
_version_ 1782279004291596288
author See, Linda
Comber, Alexis
Salk, Carl
Fritz, Steffen
van der Velde, Marijn
Perger, Christoph
Schill, Christian
McCallum, Ian
Kraxner, Florian
Obersteiner, Michael
author_facet See, Linda
Comber, Alexis
Salk, Carl
Fritz, Steffen
van der Velde, Marijn
Perger, Christoph
Schill, Christian
McCallum, Ian
Kraxner, Florian
Obersteiner, Michael
author_sort See, Linda
collection PubMed
description There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly being seen as one potentially powerful way of increasing the supply of in-situ data, but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover, while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
format Online
Article
Text
id pubmed-3729953
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-37299532013-08-09 Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts See, Linda Comber, Alexis Salk, Carl Fritz, Steffen van der Velde, Marijn Perger, Christoph Schill, Christian McCallum, Ian Kraxner, Florian Obersteiner, Michael PLoS One Research Article There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly being seen as one potentially powerful way of increasing the supply of in-situ data but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact although results varied by land cover while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future. Public Library of Science 2013-07-31 /pmc/articles/PMC3729953/ /pubmed/23936126 http://dx.doi.org/10.1371/journal.pone.0069958 Text en © 2013 See et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
See, Linda
Comber, Alexis
Salk, Carl
Fritz, Steffen
van der Velde, Marijn
Perger, Christoph
Schill, Christian
McCallum, Ian
Kraxner, Florian
Obersteiner, Michael
Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title_full Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title_fullStr Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title_full_unstemmed Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title_short Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
title_sort comparing the quality of crowdsourced data contributed by expert and non-experts
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3729953/
https://www.ncbi.nlm.nih.gov/pubmed/23936126
http://dx.doi.org/10.1371/journal.pone.0069958
work_keys_str_mv AT seelinda comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT comberalexis comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT salkcarl comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT fritzsteffen comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT vanderveldemarijn comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT pergerchristoph comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT schillchristian comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT mccallumian comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT kraxnerflorian comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts
AT obersteinermichael comparingthequalityofcrowdsourceddatacontributedbyexpertandnonexperts