
RAMESES II reporting standards for realist evaluations


Bibliographic Details
Main Authors: Wong, Geoff, Westhorp, Gill, Manzano, Ana, Greenhalgh, Joanne, Jagosh, Justin, Greenhalgh, Trish
Format: Online Article Text
Language: English
Published: BioMed Central 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4920991/
https://www.ncbi.nlm.nih.gov/pubmed/27342217
http://dx.doi.org/10.1186/s12916-016-0643-1
collection PubMed
description
BACKGROUND: Realist evaluation is increasingly used in health services and other fields of research and evaluation. No previous standards exist for reporting realist evaluations. This standard was developed as part of the RAMESES II project. The project’s aim is to produce initial reporting standards for realist evaluations.

METHODS: We purposively recruited a maximum variety sample of an international group of experts in realist evaluation to our online Delphi panel. Panel members came from a variety of disciplines, sectors and policy fields. We prepared the briefing materials for our Delphi panel by summarising the most recent literature on realist evaluations to identify how and why rigour had been demonstrated and where gaps in expertise and rigour were evident. We also drew on our collective experience as realist evaluators, in training and supporting realist evaluations, and on the RAMESES email list to help us develop the briefing materials. Through discussion within the project team, we developed a list of issues related to quality that needed to be addressed when carrying out realist evaluations. These were then shared with the panel members and their feedback was sought. Once the panel members had provided their feedback on our briefing materials, we constructed a set of items for potential inclusion in the reporting standards and circulated these online to panel members. Panel members were asked to rank each potential item twice on a 7-point Likert scale, once for relevance and once for validity. They were also encouraged to provide free text comments.

RESULTS: We recruited 35 panel members from 27 organisations across six countries and nine different disciplines. Within three rounds our Delphi panel was able to reach consensus on 20 items that should be included in the reporting standards for realist evaluations. The overall response rates for all items for rounds 1, 2 and 3 were 94%, 76% and 80%, respectively.

CONCLUSION: These reporting standards for realist evaluations have been developed by drawing on a range of sources. We hope that these standards will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.
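The Delphi rating step described in the methods (each candidate item rated twice on a 7-point Likert scale, once for relevance and once for validity, with consensus reached over rounds) can be sketched in code. Note the 70% agreement threshold and the "agree" cutoff of ratings of 5 or above are illustrative assumptions for demonstration only; the abstract does not state the panel's exact consensus rule.

```python
# Illustrative sketch of a Delphi consensus check on candidate reporting
# items. Panelists rate each item on a 7-point Likert scale for relevance
# and for validity. ASSUMPTIONS (not from the source): an item reaches
# consensus when at least 70% of panelists rate it >= 5 on BOTH dimensions.

def reached_consensus(relevance, validity, agree_cutoff=5, threshold=0.70):
    """Return True if the share of panelists rating the item at or above
    agree_cutoff meets the threshold on both relevance and validity."""
    def share_agreeing(ratings):
        return sum(r >= agree_cutoff for r in ratings) / len(ratings)
    return (share_agreeing(relevance) >= threshold
            and share_agreeing(validity) >= threshold)

# One hypothetical item rated by a small panel of ten:
relevance = [7, 6, 5, 7, 6, 4, 7, 6, 5, 7]
validity = [6, 6, 7, 5, 7, 5, 6, 6, 4, 7]
print(reached_consensus(relevance, validity))  # True: 9/10 agree on both
```

In a multi-round Delphi, items falling short of the threshold would be revised using the free-text comments and re-rated in the next round, which is how the panel converged on 20 items within three rounds.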
format Online
Article
Text
id pubmed-4920991
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher BioMed Central
record_format MEDLINE/PubMed
journal BMC Med
article_type Guideline
published 2016-06-24
license © The Author(s). 2016. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
title RAMESES II reporting standards for realist evaluations
topic Guideline