Bibliometrics and research evaluation: uses and abuses

Bibliographic Details
Main author: Gingras, Yves
Language: eng
Published: MIT Press 2016
Subjects: Information Transfer and Management
Online access: http://cds.cern.ch/record/2217016
_version_ 1780952074772545536
author Gingras, Yves
author_facet Gingras, Yves
author_sort Gingras, Yves
collection CERN
description The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
id cern-2217016
institution European Organization for Nuclear Research
language eng
publishDate 2016
publisher MIT Press
record_format invenio
spelling cern-2217016 2021-04-21T19:31:34Z http://cds.cern.ch/record/2217016 eng Gingras, Yves Bibliometrics and research evaluation: uses and abuses Information Transfer and Management MIT Press oai:cds.cern.ch:2217016 2016
spellingShingle Information Transfer and Management
Gingras, Yves
Bibliometrics and research evaluation: uses and abuses
title Bibliometrics and research evaluation: uses and abuses
title_full Bibliometrics and research evaluation: uses and abuses
title_fullStr Bibliometrics and research evaluation: uses and abuses
title_full_unstemmed Bibliometrics and research evaluation: uses and abuses
title_short Bibliometrics and research evaluation: uses and abuses
title_sort bibliometrics and research evaluation: uses and abuses
topic Information Transfer and Management
url http://cds.cern.ch/record/2217016
work_keys_str_mv AT gingrasyves bibliometricsandresearchevaluationusesandabuses
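
As an illustrative aside, not part of the catalog record or of Gingras's book: the h-index named in the description is commonly defined as the largest number h such that an author has h papers with at least h citations each. A minimal Python sketch of that computation, using an invented example list of citation counts:

    def h_index(citations):
        # Sort citation counts in descending order, then find the largest
        # 1-based rank r at which the r-th paper still has >= r citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical example: five papers with these citation counts yield an
    # h-index of 3, since three of them have at least 3 citations each.
    print(h_index([10, 8, 3, 1, 0]))  # -> 3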