Reliability of team-based self-monitoring in critical events: a pilot study

Bibliographic Details
Main authors: Stocker, Martin, Menadue, Lynda, Kakat, Suzan, De Costa, Kumi, Combes, Julie, Banya, Winston, Lane, Mary, Desai, Ajay, Burmester, Margarita
Format: Online Article Text
Language: English
Published: BioMed Central 2013
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4219174/
https://www.ncbi.nlm.nih.gov/pubmed/24289232
http://dx.doi.org/10.1186/1471-227X-13-22
Description
Summary:

BACKGROUND: Teamwork is a critical component during critical events. Assessment is mandatory for remediation and for targeting training programmes at observed performance gaps.

METHODS: The primary purpose was to test the feasibility of team-based self-monitoring of crisis resource management with a validated teamwork assessment tool. A secondary purpose was to assess item-specific reliability and content validity in order to develop a modified, context-optimised assessment tool. We conducted a prospective, single-centre study to assess team-based self-monitoring of teamwork after in-situ inter-professional simulated critical events by comparison with an assessment by observers. The Mayo High Performance Teamwork Scale (MHPTS) was used as the assessment tool, with evaluation of internal consistency, item-specific consensus estimates for agreement between participating teams and observers, and content validity.

RESULTS: 105 participants and 58 observers completed the MHPTS after a total of 16 simulated critical events over 8 months. Summative internal consistency of the MHPTS, calculated as Cronbach's alpha, was acceptable: 0.712 for observers and 0.710 for participants. The overall consensus estimate for dichotomous data (agreement/non-agreement) was 0.62 (Cohen's kappa; interquartile range 0.31-0.87). 6/16 items had excellent reliability (kappa > 0.8) and 3/16 good reliability (kappa > 0.6). Short questions concerning easy-to-observe behaviours were more likely to be reliable. The MHPTS was modified using a threshold for good reliability of kappa > 0.6. The result is a 9-item self-assessment tool (TeamMonitor) with a calculated median kappa of 0.86 (interquartile range 0.67-1.0) and good content validity.

CONCLUSIONS: Team-based self-monitoring with the MHPTS to assess team performance during simulated critical events is feasible. A context-based modification of the tool is achievable with good internal consistency and content validity. Further studies are needed to investigate whether team-based self-monitoring may be used as part of a programme of assessment to target training programmes at observed performance gaps.
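
The two reliability statistics reported in the abstract are standard measures. The following short Python sketch is purely illustrative: it is not the study's analysis code, and the function names, rating matrices, and 0/1 judgement vectors below are made-up examples used only to show how Cronbach's alpha and Cohen's kappa are defined.

from statistics import pvariance

def cronbach_alpha(item_scores):
    # Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    # where item_scores is a list of k items, each a list of per-respondent scores.
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]            # total score per respondent
    item_variance_sum = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters giving 0/1 judgements:
    # p_o is the observed proportion of agreement, p_e the agreement expected by chance.
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n                                        # rater A's "yes" rate
    p_b = sum(rater_b) / n                                        # rater B's "yes" rate
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: three scale items scored 0-2 by five respondents, and
# team vs. observer dichotomous judgements (1 = behaviour observed) on ten items.
items = [[2, 1, 2, 0, 2], [1, 1, 2, 1, 2], [2, 2, 2, 0, 1]]
team = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
observer = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")           # ~0.73 for this toy data
print(f"Cohen's kappa:    {cohens_kappa(team, observer):.3f}")    # ~0.52 for this toy data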