Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study
INTRODUCTION: Podcasts are increasingly being used for medical education. Studies have found that the assessment of the quality of online resources can be challenging. We sought to determine the reliability of gestalt quality assessment of education podcasts in emergency medicine. …
Main authors: Woods, Jason M.; Chan, Teresa M.; Roland, Damian; Riddell, Jeff; Tagg, Andrew; Thoma, Brent
Format: Online Article Text
Language: English
Published: Bohn Stafleu van Loghum, 2020
Subjects: Original Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7550476/ https://www.ncbi.nlm.nih.gov/pubmed/32495235 http://dx.doi.org/10.1007/s40037-020-00589-x
_version_ | 1783592981393047552 |
author | Woods, Jason M.; Chan, Teresa M.; Roland, Damian; Riddell, Jeff; Tagg, Andrew; Thoma, Brent |
author_facet | Woods, Jason M.; Chan, Teresa M.; Roland, Damian; Riddell, Jeff; Tagg, Andrew; Thoma, Brent |
author_sort | Woods, Jason M. |
collection | PubMed |
description | INTRODUCTION: Podcasts are increasingly being used for medical education. Studies have found that the assessment of the quality of online resources can be challenging. We sought to determine the reliability of gestalt quality assessment of education podcasts in emergency medicine. METHODS: An international, interprofessional sample of raters was recruited through social media, direct contact, and the extended personal network of the study team. Each participant listened to eight podcasts (selected to include a variety of accents, number of speakers, and topics) and rated the quality of that podcast on a seven-point Likert scale. Phi coefficients were calculated within each group and overall. Decision studies were conducted using a phi of 0.8. RESULTS: A total of 240 collaborators completed all eight surveys and were included in the analysis. Attendings, medical students, and physician assistants had the lowest individual-level variance and thus the lowest number of required raters to reliably evaluate quality (phi >0.80). Overall, 20 raters were required to reliably evaluate the quality of emergency medicine podcasts. DISCUSSION: Gestalt ratings of quality from approximately 20 health professionals are required to reliably assess the quality of a podcast. This finding should inform future work focused on developing and validating tools to support the evaluation of quality in these resources. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1007/s40037-020-00589-x) contains supplementary material, which is available to authorized users. |
format | Online Article Text |
id | pubmed-7550476 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Bohn Stafleu van Loghum |
record_format | MEDLINE/PubMed |
spelling | pubmed-75504762020-10-19 Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study Woods, Jason M. Chan, Teresa M. Roland, Damian Riddell, Jeff Tagg, Andrew Thoma, Brent Perspect Med Educ Original Article
Bohn Stafleu van Loghum 2020-06-03 2020-10 /pmc/articles/PMC7550476/ /pubmed/32495235 http://dx.doi.org/10.1007/s40037-020-00589-x Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Original Article Woods, Jason M. Chan, Teresa M. Roland, Damian Riddell, Jeff Tagg, Andrew Thoma, Brent Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title | Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title_full | Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title_fullStr | Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title_full_unstemmed | Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title_short | Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study |
title_sort | evaluating the reliability of gestalt quality ratings of medical education podcasts: a metriq study |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7550476/ https://www.ncbi.nlm.nih.gov/pubmed/32495235 http://dx.doi.org/10.1007/s40037-020-00589-x |
work_keys_str_mv | AT woodsjasonm evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy AT chanteresam evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy AT rolanddamian evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy AT riddelljeff evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy AT taggandrew evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy AT thomabrent evaluatingthereliabilityofgestaltqualityratingsofmedicaleducationpodcastsametriqstudy |