2286 A CTSA External Reviewer Exchange Consortium: Description and lessons learned

Bibliographic Details
Main Authors: Schneider, Margaret; Mathew, Tanya; Gibson, Madeline; Zeller, Christine; Ranu, Hardeep; Davidson, Adam; Dillon, Pamela; Indelicato, Nia; Dinkjian, Aileen
Format: Online Article (Text)
Language: English
Published: Cambridge University Press, 2018
Subjects: Basic/Translational Science/Team Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6799367/
http://dx.doi.org/10.1017/cts.2018.39
Description:

OBJECTIVES/SPECIFIC AIMS: To share the experience gained and lessons learned from a cross-CTSA collaborative effort to improve the review process for Pilot Studies awards by exchanging external reviewers.

METHODS/STUDY POPULATION: The CEREC process is managed by a web-based tracking system that enables all participating members to view the status of reviewer invitations at any time. This online tracking system is supplemented by monthly conference calls during which new calls for proposals are announced and best practices are identified. Each CTSA hub customized the CEREC model based on its individual pilot program needs and review process. Some hubs have supplemented their internal reviews by posting on CEREC only those proposals that lack reviewers with significant expertise within their institutions. Other hubs have requested 1–3 external reviewers for each of their proposals, or for a selection of the most promising proposals. In anticipation of potential scoring discrepancies, several hubs added a self-assessment of reviewer expertise and confidence at the end of each review. If a proposal is on the cusp of fundability, the reviewers’ self-assessment may be taken into account. In addition to the tracking data collected by the online system, a survey of CEREC reviewers was conducted using Qualtrics.

RESULTS/ANTICIPATED RESULTS: Across the 144 proposals submitted for review, CEREC members issued a total of 396 email invitations to potential reviewers. The number of invitations required to yield a reviewer ranged from 1 to 17. A total of 224 invitations were accepted, for a response rate of 56%. No external reviewer could be located for 5 proposals (3%). Ultimately, 196 completed reviews were submitted, for a completion rate of 87%. The most common reasons for non-completion after acceptance of an invitation included reviewer illness and discovery of a conflict of interest. CEREC members found the process extremely useful for locating qualified reviewers who were not in conflict with the proposal being reviewed and for identifying reviewers for proposals on highly specialized topics. The survey of CEREC reviewers indicated that they generally found the process easy to navigate and intellectually rewarding; most would be willing to review additional CEREC proposals in the future. External reviewer comments and scores were generally in agreement with internal reviewer comments and scores. Thus, hubs could weight external reviewer scores equally with internal reviewer scores, without feeling compelled to calibrate the external scores. Overall, the reviews obtained through CEREC external reviewers appear to be of higher quality and more pertinent, mainly owing to stronger matching of scientific expertise and a reduction in potential bias.

DISCUSSION/SIGNIFICANCE OF IMPACT: Some aspects of the process emerged that will be addressed in the future to make the system more efficient. One issue that arose was the burden on the system during multiple simultaneous calls for proposals; future plans call for harmonizing review cycles to avoid these overlaps. Efficiency will also be improved by optimizing the timing of reviewer invitations to minimize the probability of obtaining more reviews than requested. In addition to the original objective of CEREC, the collaboration has led to further exchange of information regarding methods and processes for running pilot funding programs.
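To make the reported figures concrete, here is a minimal sketch, in Python, of the kind of per-invitation tracking the web-based system implies, together with the arithmetic behind the quoted rates. The `Status` values and `Invitation` fields are assumptions for illustration; the abstract does not describe the actual CEREC tracker's data model.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

# Hypothetical status values; the abstract does not describe the tracker's internals.
class Status(Enum):
    PENDING = "pending"      # invitation sent, no response yet
    DECLINED = "declined"
    ACCEPTED = "accepted"    # accepted, review not yet submitted
    COMPLETED = "completed"  # review submitted
    DROPPED = "dropped"      # accepted, then withdrew (e.g., illness, conflict of interest)

@dataclass
class Invitation:
    proposal_id: str
    reviewer_email: str
    status: Status

def summarize(invitations: list[Invitation]) -> dict[str, float]:
    """Aggregate per-invitation statuses into the rates quoted in the abstract."""
    counts = Counter(inv.status for inv in invitations)
    accepted = counts[Status.ACCEPTED] + counts[Status.COMPLETED] + counts[Status.DROPPED]
    return {
        "response_rate": accepted / len(invitations) if invitations else 0.0,
        "completion_rate": counts[Status.COMPLETED] / accepted if accepted else 0.0,
    }

# Sanity check against the reported totals.
print(f"response rate:   {224 / 396:.0%}")  # 57% when rounded; the abstract reports 56%, consistent with truncation
print(f"completion rate: {196 / 224:.0%}")  # 88% when rounded; the abstract reports 87%, consistent with truncation
```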
For example, one site developed a method using REDCap to manage its reviewer database, an innovation that is being shared with the other CEREC partners. Another site has a well-developed process for integrating community reviewers into its review process and is sharing its training materials with the remaining CEREC partners.
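The REDCap-based reviewer database mentioned above is not specified further, but every REDCap project exposes a standard HTTP API for record export. The following is a hedged sketch assuming a hypothetical project whose records hold reviewer contact details and expertise keywords; the URL, token, and field names (`reviewer_id`, `reviewer_email`, `expertise_keywords`) are placeholders, not CEREC's actual configuration.

```python
import requests

# Placeholders: each REDCap project has its own API URL and project token.
REDCAP_API_URL = "https://redcap.example.edu/api/"
REDCAP_API_TOKEN = "YOUR_PROJECT_TOKEN"

def export_reviewers():
    """Export records from a hypothetical reviewer-database REDCap project.

    Uses REDCap's standard record-export API call; the fields requested
    here are illustrative, not the actual CEREC schema.
    """
    response = requests.post(
        REDCAP_API_URL,
        data={
            "token": REDCAP_API_TOKEN,
            "content": "record",   # standard REDCap API: operate on records
            "action": "export",
            "format": "json",
            "type": "flat",
            "fields[0]": "reviewer_id",
            "fields[1]": "reviewer_email",
            "fields[2]": "expertise_keywords",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for reviewer in export_reviewers():
        print(reviewer["reviewer_id"], reviewer["expertise_keywords"])
```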
Source: J Clin Transl Sci (Basic/Translational Science/Team Science). Published online by Cambridge University Press, 21 November 2018.
© The Association for Clinical and Translational Science 2018. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.