
A new approach to grant review assessments: score, then rank


Bibliographic Details
Main Authors: Gallo, Stephen A., Pearce, Michael, Lee, Carole J., Erosheva, Elena A.
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10367367/
https://www.ncbi.nlm.nih.gov/pubmed/37488628
http://dx.doi.org/10.1186/s41073-023-00131-7
_version_ 1785077376374276096
author Gallo, Stephen A.
Pearce, Michael
Lee, Carole J.
Erosheva, Elena A.
author_facet Gallo, Stephen A.
Pearce, Michael
Lee, Carole J.
Erosheva, Elena A.
author_sort Gallo, Stephen A.
collection PubMed
description BACKGROUND: In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications.
METHODS: We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical “toy” examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges’ evaluations.
RESULTS: For the theoretical examples, we show how the model can provide a preference order for equally rated proposals by incorporating rankings, for proposals with ratings and only partial rankings (and how these orderings differ from a ratings-only approach), and for proposals where judges provide internally inconsistent ratings/rankings and outlier scores. Finally, we discuss how, using real-world panel data, this method can provide information about funding priority with a level of accuracy in a format well suited for research funding decisions.
CONCLUSIONS: A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output for making an informed funding decision and is general enough to be applied to settings such as the NIH panel review process.
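The core idea in the abstract (rankings resolve ties that summary statistics of ratings cannot) can be illustrated with a deliberately simplified sketch. This is not the authors' Mallows-Binomial model; it is a hypothetical two-stage heuristic (mean ratings first, Borda-style points from partial rankings as a tie-breaker), and all names here are illustrative:

```python
# Simplified illustration (NOT the Mallows-Binomial model): break ties
# in mean ratings using Borda-style points from judges' partial rankings.
from collections import defaultdict

def integrated_order(ratings, rankings):
    """ratings: {proposal: [scores]}, lower = better (as in NIH scoring).
    rankings: partial rankings, each an ordered list, best proposal first."""
    mean = {p: sum(s) / len(s) for p, s in ratings.items()}
    # Borda-style points: a higher-ranked proposal earns more points.
    borda = defaultdict(float)
    for r in rankings:
        for i, p in enumerate(r):
            borda[p] += len(r) - i
    # Order by mean rating; use ranking points only to resolve ties.
    return sorted(mean, key=lambda p: (mean[p], -borda[p]))

props = {"A": [2, 2, 3], "B": [2, 2, 3], "C": [4, 5, 4]}
ranks = [["A", "B"], ["A", "C"]]
print(integrated_order(props, ranks))  # A before B despite equal means
```

Unlike this heuristic, the Mallows-Binomial fits ratings and rankings jointly in one probabilistic model, so rankings can inform the whole ordering rather than only breaking exact ties.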
format Online
Article
Text
id pubmed-10367367
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-10367367 2023-07-26 A new approach to grant review assessments: score, then rank Gallo, Stephen A. Pearce, Michael Lee, Carole J. Erosheva, Elena A. Res Integr Peer Rev Methodology BioMed Central 2023-07-24 /pmc/articles/PMC10367367/ /pubmed/37488628 http://dx.doi.org/10.1186/s41073-023-00131-7 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in the article, unless otherwise stated in a credit line to the data.
spellingShingle Methodology
Gallo, Stephen A.
Pearce, Michael
Lee, Carole J.
Erosheva, Elena A.
A new approach to grant review assessments: score, then rank
title A new approach to grant review assessments: score, then rank
title_full A new approach to grant review assessments: score, then rank
title_fullStr A new approach to grant review assessments: score, then rank
title_full_unstemmed A new approach to grant review assessments: score, then rank
title_short A new approach to grant review assessments: score, then rank
title_sort new approach to grant review assessments: score, then rank
topic Methodology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10367367/
https://www.ncbi.nlm.nih.gov/pubmed/37488628
http://dx.doi.org/10.1186/s41073-023-00131-7
work_keys_str_mv AT gallostephena anewapproachtograntreviewassessmentsscorethenrank
AT pearcemichael anewapproachtograntreviewassessmentsscorethenrank
AT leecarolej anewapproachtograntreviewassessmentsscorethenrank
AT eroshevaelenaa anewapproachtograntreviewassessmentsscorethenrank
AT gallostephena newapproachtograntreviewassessmentsscorethenrank
AT pearcemichael newapproachtograntreviewassessmentsscorethenrank
AT leecarolej newapproachtograntreviewassessmentsscorethenrank
AT eroshevaelenaa newapproachtograntreviewassessmentsscorethenrank