Assessing practical skills in cardiopulmonary resuscitation: Discrepancy between standard visual evaluation and a mechanical feedback device

Bibliographic Details
Main Authors: González, Baltasar Sánchez; Martínez, Laura; Cerdà, Manel; Piacentini, Enrique; Trenado, Josep; Quintana, Salvador
Format: Online Article Text
Language: English
Published: Wolters Kluwer Health, 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5380293/
https://www.ncbi.nlm.nih.gov/pubmed/28353609
http://dx.doi.org/10.1097/MD.0000000000006515
Collection: PubMed
Description: This paper aims to analyze agreement in the assessment of external chest compressions (ECC) by 3 human raters and dedicated feedback software. While 54 volunteer health workers (medical transport technicians), trained and experienced in cardiopulmonary resuscitation (CPR), performed a complete sequence of basic CPR maneuvers on a manikin incorporating feedback software (Laerdal PC v 4.2.1 Skill Reporting Software) (L), 3 expert CPR instructors (A, B, and C) visually assessed ECC, evaluating hand placement, compression depth, chest decompression, and rate. We analyzed the concordance among the raters (A, B, and C) and between the raters and L with Cohen's kappa coefficient (K), intraclass correlation coefficients (ICC), Bland–Altman plots, and survival–agreement plots. The agreement (expressed as Cohen's K and ICC) was ≥0.54 in only 3 instances and was ≤0.45 in more than half. Bland–Altman plots showed significant dispersion of the data. The survival–agreement plot showed a high degree of discordance between pairs of raters (A–L, B–L, and C–L) when the level of tolerance was set low. In visual assessment of ECC, there is a significant lack of agreement among accredited raters and significant dispersion and inconsistency in data, bringing into question the reliability and validity of this method of measurement.
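The abstract's headline statistic, Cohen's kappa, measures how much two raters agree beyond what chance alone would produce. The following is a minimal illustrative sketch of that computation on hypothetical pass/fail judgments; it is not the authors' code, and the example data are invented for demonstration only.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments on the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: fraction of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: expected overlap given each rater's marginal frequencies.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum(m1[c] * m2[c] for c in m1) / n**2
    # Kappa: agreement achieved beyond chance, scaled by the maximum possible.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail judgments by two raters on 6 compression sequences:
a = ["pass", "pass", "fail", "pass", "fail", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohen_kappa(a, b), 3))  # → 0.333
```

Here the raters agree on 4 of 6 items (p_o ≈ 0.667), but balanced marginals make half that agreement expected by chance (p_e = 0.5), yielding kappa ≈ 0.33 — in the same weak-agreement range the paper reports for most rater pairs.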
ID: pubmed-5380293
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Medicine (Baltimore)
Online Publication Date: 2017-03-31
Copyright © 2017 the Author(s). Published by Wolters Kluwer Health, Inc. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. License: http://creativecommons.org/licenses/by/4.0