How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool
Main authors: | Foster, Madison; Presseau, Justin; Podolsky, Eyal; McIntyre, Lauralyn; Papoulias, Maria; Brehaut, Jamie C. |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2021 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8369748/ https://www.ncbi.nlm.nih.gov/pubmed/34404449 http://dx.doi.org/10.1186/s13012-021-01145-9 |
_version_ | 1783739350940385280 |
author | Foster, Madison Presseau, Justin Podolsky, Eyal McIntyre, Lauralyn Papoulias, Maria Brehaut, Jamie C. |
author_sort | Foster, Madison |
collection | PubMed |
description | BACKGROUND: Healthcare Audit and Feedback (A&F) interventions have been shown to be an effective means of changing healthcare professional behavior, but work is required to optimize them, as evidence suggests that A&F interventions are not improving over time. Recent published guidance has suggested an initial set of best practices that may help to increase intervention effectiveness, which focus on the “Nature of the desired action,” “Nature of the data available for feedback,” “Feedback display,” and “Delivering the feedback intervention.” We aimed to develop a generalizable evaluation tool that can be used to assess whether A&F interventions conform to these suggestions for best practice and conducted initial testing of the tool through application to a sample of critical care A&F interventions. METHODS: We used a consensus-based approach to develop an evaluation tool from published guidance and subsequently applied the tool to conduct a secondary analysis of A&F interventions. To start, the 15 suggestions for improved feedback interventions published by Brehaut et al. were deconstructed into ratable items. Items were developed through iterative consensus meetings among researchers. These items were then piloted on 12 A&F studies (two reviewers met for consensus each time after independently applying the tool to four A&F intervention studies). After each consensus meeting, items were modified to improve clarity and specificity, and to help increase inter-coder reliability. We then assessed the conformity to best practices of 17 critical care A&F interventions, sourced from a systematic review of A&F interventions on provider ordering of laboratory tests and transfusions in the critical care setting. Data for each criterion were extracted by one coder and confirmed by a second; results were then aggregated, presented graphically or in a table, and described narratively. 
RESULTS: In total, 52 criteria items were developed (38 ratable items and 14 descriptive items). Eight studies targeted lab test ordering behaviors, and ten targeted blood transfusion ordering. Items focused on specifying the “Nature of the Desired Action” were adhered to most commonly—feedback was often presented in the context of an external priority (13/17), showed or described a discrepancy in performance (14/17), and in all cases it was reasonable for the recipients to be responsible for the change in behavior (17/17). Items focused on the “Nature of the Data Available for Feedback” were adhered to less often—only some interventions provided individual (5/17) or patient-level data (5/17), and few included aspirational comparators (2/17), or justifications for specificity of feedback (4/17), choice of comparator (0/9), or the interval between reports (3/13). Items focused on the “Nature of the Feedback Display” were reported poorly—just under half of interventions reported providing feedback in more than one way (8/17), and interventions rarely included pilot-testing of the feedback (1/17 unclear) or presentation of a visual display and summary message in close proximity to each other (1/13). Items focused on “Delivering the Feedback Intervention” were also poorly reported—interventions rarely reported use of barrier/enabler assessments (0/17), involved target members in the development of the feedback (0/17), or involved explicit design to be received and discussed in a social context (3/17); however, most interventions clearly indicated who was providing the feedback (11/17), involved a facilitator (8/12), or involved engaging in self-assessment around the target behavior prior to receipt of feedback (12/17). CONCLUSIONS: Many of the theory-informed best practice items were not consistently applied in critical care and can suggest clear ways to improve interventions. 
Standardized reporting of detailed intervention descriptions and feedback templates may also help to further advance research in this field. The 52-item tool can serve as a basis for reliably assessing concordance with best practice guidance in existing A&F interventions trialed in other healthcare settings, and could be used to inform future A&F intervention development. TRIAL REGISTRATION: Not applicable. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13012-021-01145-9. |
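The varying denominators in the results (e.g., 0/9, 3/13 rather than x/17) reflect that some ratable items did not apply to every intervention. A minimal sketch, not the authors' code, of how such adherence tallies could be computed from per-study ratings — all item names and rating values here are hypothetical:

```python
# Hypothetical sketch of tallying adherence to ratable items across
# a set of A&F interventions. Each item is rated per study as "yes"
# (adheres), "no" (does not adhere), or "na" (item not applicable);
# "na" studies are excluded from the denominator, which is how
# fractions like 0/9 or 3/13 can arise from 17 interventions.

from collections import Counter

def adherence_summary(ratings):
    """ratings: dict mapping item name -> list of per-study ratings.

    Returns dict mapping item name -> (adhering studies, applicable studies).
    """
    summary = {}
    for item, values in ratings.items():
        counts = Counter(values)
        applicable = counts["yes"] + counts["no"]  # drop "na" from denominator
        summary[item] = (counts["yes"], applicable)
    return summary

# Illustrative data shaped like the reported results (not real study data).
ratings = {
    "discrepancy_shown": ["yes"] * 14 + ["no"] * 3,
    "comparator_justified": ["no"] * 9 + ["na"] * 8,
    "report_interval_justified": ["yes"] * 3 + ["no"] * 10 + ["na"] * 4,
}

for item, (adhered, n) in adherence_summary(ratings).items():
    print(f"{item}: {adhered}/{n}")
```

Running this prints `discrepancy_shown: 14/17`, `comparator_justified: 0/9`, and `report_interval_justified: 3/13`, matching the style of denominators in the abstract.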
format | Online Article Text |
id | pubmed-8369748 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-8369748 2021-08-18 How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool Foster, Madison; Presseau, Justin; Podolsky, Eyal; McIntyre, Lauralyn; Papoulias, Maria; Brehaut, Jamie C. Implement Sci Research BioMed Central 2021-08-17 /pmc/articles/PMC8369748/ /pubmed/34404449 http://dx.doi.org/10.1186/s13012-021-01145-9 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Foster, Madison Presseau, Justin Podolsky, Eyal McIntyre, Lauralyn Papoulias, Maria Brehaut, Jamie C. How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool |
title | How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool |
title_short | How well do critical care audit and feedback interventions adhere to best practice? Development and application of the REFLECT-52 evaluation tool |
title_sort | how well do critical care audit and feedback interventions adhere to best practice? development and application of the reflect-52 evaluation tool |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8369748/ https://www.ncbi.nlm.nih.gov/pubmed/34404449 http://dx.doi.org/10.1186/s13012-021-01145-9 |
work_keys_str_mv | AT fostermadison howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool AT presseaujustin howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool AT podolskyeyal howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool AT mcintyrelauralyn howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool AT papouliasmaria howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool AT brehautjamiec howwelldocriticalcareauditandfeedbackinterventionsadheretobestpracticedevelopmentandapplicationofthereflect52evaluationtool |