Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review
Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experiments. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still being challenged in per...
Main Authors: Samimi, Parnia; Ravana, Sri Devi
Format: Online Article Text
Language: English
Published: Hindawi Publishing Corporation, 2014
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4055211/
https://www.ncbi.nlm.nih.gov/pubmed/24977172
http://dx.doi.org/10.1155/2014/135641
Similar Items

- Crowdsourcing Truthfulness: The Impact of Judgment Scale and Assessor Bias
  by: La Barbera, David, et al.
  Published: (2020)
- Crowdsourcing the creation of image segmentation algorithms for connectomics
  by: Arganda-Carreras, Ignacio, et al.
  Published: (2015)
- Design Judgments in the Creation of eLearning Modules
  by: Farmer, Tadd, et al.
  Published: (2022)
- Fighting misinformation on social media using crowdsourced judgments of news source quality
  by: Pennycook, Gordon, et al.
  Published: (2019)
- Retrieval Practice Facilitates Judgments of Learning Through Multiple Mechanisms: Simultaneous and Independent Contribution of Retrieval Confidence and Retrieval Fluency
  by: Chen, Xi, et al.
  Published: (2019)