Man versus machine? Self-reports versus algorithmic measurement of publications

Bibliographic Details
Main Authors: Jiang, Xuan; Chang, Wan-Ying; Weinberg, Bruce A.
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8480886/
https://www.ncbi.nlm.nih.gov/pubmed/34587169
http://dx.doi.org/10.1371/journal.pone.0257309
Description
Summary: This paper uses newly available data from Web of Science on publications matched to researchers in the Survey of Doctorate Recipients to compare the quality of scientific publication data collected by surveys versus algorithmic approaches. We illustrate the different types of measurement error in self-reported and machine-generated data by estimating how publication measures from the two approaches relate to career outcomes (e.g., salaries and faculty rankings). We find that the potential biases in the self-reports are smaller than those in the algorithmic data. Moreover, the errors in the two approaches are quite intuitive: the measurement errors in the algorithmic data stem mainly from matching accuracy, which depends primarily on the frequency of names and the data available to make matches, while the noise in self-reports increases over the career as researchers' publication records become more complex, harder to recall, and less immediately relevant for career progress. At a methodological level, we show how the approaches can be evaluated using accepted statistical methods without gold-standard data. We also provide guidance on how to use the new linked data.
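
The evaluation strategy described in the summary, relating each publication measure to a career outcome and comparing the resulting estimates, can be illustrated with a minimal sketch. This is not the authors' code: the file name and column names (log_salary, pubs_self_report, pubs_algorithmic, career_years) are hypothetical placeholders for the SDR-Web of Science linked data, and the reading assumes classical measurement error, under which the noisier measure yields the more attenuated coefficient.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the SDR-Web of Science linked file; the file and
# column names are placeholders, not the actual data layout.
df = pd.read_csv("sdr_wos_linked.csv")

# Fit the same log-salary model twice, once per publication measure.
m_self = smf.ols("log_salary ~ pubs_self_report + career_years", data=df).fit()
m_algo = smf.ols("log_salary ~ pubs_algorithmic + career_years", data=df).fit()

# Compare the two publication coefficients: under classical measurement error,
# the measure with the more attenuated coefficient is the noisier one, so the
# comparison does not require gold-standard publication records.
print("self-report coefficient:", m_self.params["pubs_self_report"])
print("algorithmic coefficient:", m_algo.params["pubs_algorithmic"])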