
A Refined Teach-back Observation Tool: Validity Evidence in a Pediatric Setting


Bibliographic Details

Main Authors: Abrams, Mary Ann, Crichton, Kristin Garton, Oberle, Edward J., Flowers, Stacy, Crawford, Timothy N., Perry, Michael F., Mahan, John D., Reed, Suzanne
Format: Online Article Text
Language: English
Published: SLACK Incorporated 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561624/
https://www.ncbi.nlm.nih.gov/pubmed/37812909
http://dx.doi.org/10.3928/24748307-20230919-01

Description
Summary: BACKGROUND: Teach-back (TB) is recommended to assess and ensure patient understanding, thereby promoting safety, quality, and equity. Many TB trainings exist, but they typically lack assessment tools with validity evidence. We used a pediatric resident competency-based communication curriculum to develop initial validity evidence and refinement recommendations for a Teach-back Observation Tool (T-BOT).

OBJECTIVE: This study aimed to develop initial validity evidence for a refined T-BOT and provide guidance for further enhancements to improve essential TB skills training among pediatric residents.

METHODS: After an interactive health literacy (HL) training, residents participated in recorded standardized patient (SP) encounters. Raters developed T-BOT scoring criteria, then scored a gold standard (GS) TB video and resident SP encounters. For agreement, Fleiss' Kappa was computed for more than two raters, and Cohen's Kappa for two raters. Percent agreement and intraclass correlation (ICC) were also calculated. Statistics were calculated for the GS and for TB items overall, both for all six raters and for the five faculty raters. Agreement was interpreted from Kappa: no agreement (≤0), none to slight (0.01–0.20), fair (0.21–0.40), moderate (0.41–0.60), substantial (0.61–0.80), and almost perfect (0.81–1.00).

KEY RESULTS: For six raters, Kappa for the GS was 0.554 (moderate agreement) with 71.4% agreement; ICC = .597; for SP encounters, it was 0.637 (substantial) with 65.4% agreement; ICC = .647. Average individual item agreement for SP encounters was 0.605 (moderate), ranging from 0.142 (slight) to 1 (perfect). For five faculty raters, Kappa for the GS was 0.779 (substantial) with 85.7% agreement; ICC = .824; for resident SP encounters, it was 0.751 (substantial) with 76.9% agreement; ICC = .759. Average individual item agreement on SP encounters was 0.718 (substantial), ranging from 0.156 (slight) to 1 (perfect).
CONCLUSION: We provide initial validity evidence for a modified T-BOT and recommendations for improvement. With further refinements to increase validity evidence, accompanied by shared understanding of TB and rating criteria, the T-BOT may be useful in strengthening approaches to teaching and improving essential TB skills among health care team members, thereby increasing organizational HL and improving outcomes. [HLRP: Health Literacy Research and Practice. 2023;7(4):e187–e196.]
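The Methods compute Fleiss' Kappa across more than two raters and interpret it using the agreement bands listed above. As a minimal sketch of how that statistic works, the following Python implements Fleiss' Kappa from its standard definition together with those interpretation bands; the rating matrix is illustrative data, not data from this study.

```python
def fleiss_kappa(counts):
    """Fleiss' Kappa for multi-rater agreement.

    counts[i][j] = number of raters who assigned subject i to category j.
    Every subject must be rated by the same number of raters.
    """
    N = len(counts)        # number of subjects (e.g., checklist items)
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories

    # Observed agreement: per-subject agreement averaged over subjects
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)


def interpret(kappa):
    """Agreement bands as listed in the Methods section."""
    if kappa <= 0:
        return "no agreement"
    if kappa <= 0.20:
        return "none to slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"


# Hypothetical example: four raters scoring four items as done / not done
ratings = [[4, 0], [0, 4], [3, 1], [1, 3]]
kappa = fleiss_kappa(ratings)
print(f"{kappa:.3f} ({interpret(kappa)})")  # 0.500 (moderate)
```

Under this scale, the study's six-rater Kappa of 0.637 for SP encounters falls in the substantial band, and 0.554 for the gold standard falls in the moderate band.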