
Examining the Predictive Validity of NIH Peer Review Scores

The predictive validity of peer review at the National Institutes of Health (NIH) has not yet been demonstrated empirically. It might be assumed that the most efficient and expedient test of the predictive validity of NIH peer review would be an examination of the correlation between percentile scores from peer review and bibliometric indices of the publications produced from funded projects. The present study used a large dataset to examine the rationale for such a study and to determine whether it would satisfy the requirements for a test of predictive validity. The results show significant restriction of range in the applications selected for funding. Furthermore, the few applications funded with slightly worse peer review scores are neither selected at random nor representative of other applications in the same score range. The funding institutes also negotiate with applicants to address issues identified during peer review, so the peer review scores assigned to the submitted applications, especially for the few funded applications with slightly worse scores, do not reflect the changed and improved projects that are eventually funded. In addition, citation metrics by themselves are not valid or appropriate measures of scientific impact, and using bibliometric indices on their own to measure impact would likely worsen the inefficiencies and replicability problems already attributed in large part to the current over-emphasis on such indices. Therefore, retrospective analyses of the correlation between percentile scores from peer review and bibliometric indices of the publications resulting from funded grant applications are not valid tests of the predictive validity of peer review at the NIH.
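The restriction-of-range problem the abstract describes is easy to see in simulation. Below is a minimal sketch (not from the article itself; the sample size, funding cutoff, and underlying correlation are all hypothetical) showing how correlating scores with outcomes only within the narrow funded band attenuates an otherwise substantial population-level correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 10,000 applications: percentile-like
# review scores correlate r = 0.5 with a latent "true impact" variable.
n = 10_000
r_true = 0.5
score = rng.standard_normal(n)
impact = r_true * score + np.sqrt(1 - r_true**2) * rng.standard_normal(n)

# Correlation over the full range of applications.
r_all = np.corrcoef(score, impact)[0, 1]

# "Fund" only the top 15% of scores, then correlate within the
# funded subset alone, as a retrospective bibliometric study would.
funded = score >= np.quantile(score, 0.85)
r_funded = np.corrcoef(score[funded], impact[funded])[0, 1]

print(f"all applications:   r = {r_all:.2f}")     # close to 0.50
print(f"funded subset only: r = {r_funded:.2f}")  # roughly 0.25: attenuated
```

Even with a genuine relationship across all applications, the correlation within the funded band is roughly halved here, which is one reason the authors argue that such retrospective analyses cannot test the predictive validity of peer review.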

Bibliographic Details
Main Authors: Lindner, Mark D., Nakamura, Richard K.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2015
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4454673/
https://www.ncbi.nlm.nih.gov/pubmed/26039440
http://dx.doi.org/10.1371/journal.pone.0126938
Record ID: pubmed-4454673
Collection: PubMed
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: PLoS One (Research Article)
Published Online: 2015-06-03
License: https://creativecommons.org/publicdomain/zero/1.0/ This is an open-access article distributed under the terms of the Creative Commons Public Domain declaration, which stipulates that, once placed in the public domain, this work may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose.