
Piloting an approach to rapid and automated assessment of a new research initiative: Application to the National Cancer Institute’s Provocative Questions initiative

Bibliographic Details
Main Authors: Hsu, Elizabeth R., Williams, Duane E., DiJoseph, Leo G., Schnell, Joshua D., Finstad, Samantha L., Lee, Jerry S. H., Greenspan, Emily J., Corrigan, James G.
Format: Online Article Text
Language: English
Published: Oxford University Press 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3814301/
https://www.ncbi.nlm.nih.gov/pubmed/24808631
http://dx.doi.org/10.1093/reseval/rvt024
Description
Summary: Funders of biomedical research are often challenged to understand how a new funding initiative fits within the agency’s portfolio and the larger research community. While traditional assessment relies on retrospective review by subject matter experts, it is now feasible to design portfolio assessment and gap analysis tools leveraging administrative and grant application data that can be used for early and continued analysis. We piloted such methods on the National Cancer Institute’s Provocative Questions (PQ) initiative to address key questions regarding diversity of applicants; whether applicants were proposing new avenues of research; and whether grant applications were filling portfolio gaps. For the latter two questions, we defined measurements called focus shift and relevance, respectively, based on text similarity scoring. We demonstrate that two types of applicants were attracted by the PQs at rates greater than, or on par with, the general National Cancer Institute applicant pool: those with clinical degrees and new investigators. Focus shift scores tended to be relatively low, with applicants not straying far from previous research, but the majority of applications were found to be relevant to the PQ being addressed. Sensitivity to comparison text and inability to distinguish subtle scientific nuances are the primary limitations of our automated approaches based on text similarity, potentially biasing relevance and focus shift measurements. We also discuss potential uses of the relevance and focus shift measures, including the design of outcome evaluations, though further experimentation and refinement are needed for a fuller understanding of these measures before broad application.
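Note: the record above names two text-similarity-based measures (relevance and focus shift) but does not describe how the scoring was implemented. The snippet below is a minimal illustrative sketch only, assuming a common TF-IDF plus cosine-similarity approach (via scikit-learn); the function name, placeholder texts, and the mapping of the two measures onto similarity scores are assumptions for illustration, not the authors' actual method.

# Illustrative sketch: one common text-similarity scoring approach.
# Assumptions: TF-IDF + cosine similarity; the study's real method is not
# described in this record, so treat names and logic here as hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_score(text_a: str, text_b: str) -> float:
    """Return a 0-1 cosine similarity between two pieces of text."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([text_a, text_b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Hypothetical usage mirroring the two measures named in the abstract:
#   "relevance"   ~ similarity(application text, PQ text)
#   "focus shift" ~ dissimilarity(application text, applicant's prior research)
pq_text = "Text of the Provocative Question..."            # placeholder
application_text = "Grant application abstract..."          # placeholder
prior_work_text = "Applicant's previous research abstracts..."  # placeholder

relevance = similarity_score(application_text, pq_text)
focus_shift = 1.0 - similarity_score(application_text, prior_work_text)
print(f"relevance={relevance:.2f}, focus_shift={focus_shift:.2f}")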