Content validation of an interprofessional learning video peer assessment tool
Main Authors: | Nisbet, Gillian; Jorm, Christine; Roberts, Chris; Gordon, Christopher J.; Chen, Timothy F.
Format: | Online Article Text
Language: | English
Published: | BioMed Central, 2017
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5732409/ https://www.ncbi.nlm.nih.gov/pubmed/29246261 http://dx.doi.org/10.1186/s12909-017-1099-5
_version_ | 1783286690550382592 |
author | Nisbet, Gillian Jorm, Christine Roberts, Chris Gordon, Christopher J. Chen, Timothy F. |
author_facet | Nisbet, Gillian Jorm, Christine Roberts, Chris Gordon, Christopher J. Chen, Timothy F. |
author_sort | Nisbet, Gillian |
collection | PubMed |
description | BACKGROUND: Large-scale models of interprofessional learning (IPL), where outcomes are assessed, are rare within health professional curricula. To date, there is sparse research describing robust assessment strategies to support such activities. We describe the development of an IPL assessment task based on peer rating of a student-generated video evidencing collaborative interprofessional practice. We provide content validation evidence of an assessment rubric in the context of large-scale IPL. METHODS: Two established approaches to scale development in an educational setting were combined. A literature review was undertaken to develop a conceptual model of the relevant domains and issues pertaining to the assessment of student-generated videos within IPL. Starting with a prototype rubric developed from the literature, a series of staff and student workshops was undertaken to integrate expert opinion and user perspectives. Participants assessed five-minute videos produced in a prior pilot IPL activity. Outcomes from each workshop informed the next version of the rubric until agreement was reached on anchoring statements and criteria. At this point the rubric was declared fit to be used in the upcoming mandatory large-scale IPL activity. RESULTS: The assessment rubric consisted of four domains: patient issues; interprofessional negotiation; interprofessional management plan in action; and effective use of video medium to engage audience. The first three domains reflected topic content relevant to the underlying construct of interprofessional collaborative practice. The fourth domain was consistent with the broader video assessment literature calling for greater emphasis on creativity in education. CONCLUSIONS: We have provided evidence for the content validity of a video-based peer assessment task portraying interprofessional collaborative practice in the context of large-scale IPL activities for healthcare professional students. Further research is needed to establish the reliability of such a scale.
format | Online Article Text |
id | pubmed-5732409 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-5732409 2017-12-21 Content validation of an interprofessional learning video peer assessment tool Nisbet, Gillian Jorm, Christine Roberts, Chris Gordon, Christopher J. Chen, Timothy F. BMC Med Educ Research Article BACKGROUND: Large-scale models of interprofessional learning (IPL), where outcomes are assessed, are rare within health professional curricula. To date, there is sparse research describing robust assessment strategies to support such activities. We describe the development of an IPL assessment task based on peer rating of a student-generated video evidencing collaborative interprofessional practice. We provide content validation evidence of an assessment rubric in the context of large-scale IPL. METHODS: Two established approaches to scale development in an educational setting were combined. A literature review was undertaken to develop a conceptual model of the relevant domains and issues pertaining to the assessment of student-generated videos within IPL. Starting with a prototype rubric developed from the literature, a series of staff and student workshops was undertaken to integrate expert opinion and user perspectives. Participants assessed five-minute videos produced in a prior pilot IPL activity. Outcomes from each workshop informed the next version of the rubric until agreement was reached on anchoring statements and criteria. At this point the rubric was declared fit to be used in the upcoming mandatory large-scale IPL activity. RESULTS: The assessment rubric consisted of four domains: patient issues; interprofessional negotiation; interprofessional management plan in action; and effective use of video medium to engage audience. The first three domains reflected topic content relevant to the underlying construct of interprofessional collaborative practice. The fourth domain was consistent with the broader video assessment literature calling for greater emphasis on creativity in education. CONCLUSIONS: We have provided evidence for the content validity of a video-based peer assessment task portraying interprofessional collaborative practice in the context of large-scale IPL activities for healthcare professional students. Further research is needed to establish the reliability of such a scale. BioMed Central 2017-12-16 /pmc/articles/PMC5732409/ /pubmed/29246261 http://dx.doi.org/10.1186/s12909-017-1099-5 Text en © The Author(s). 2017 Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle | Research Article Nisbet, Gillian Jorm, Christine Roberts, Chris Gordon, Christopher J. Chen, Timothy F. Content validation of an interprofessional learning video peer assessment tool |
title | Content validation of an interprofessional learning video peer assessment tool |
title_full | Content validation of an interprofessional learning video peer assessment tool |
title_fullStr | Content validation of an interprofessional learning video peer assessment tool |
title_full_unstemmed | Content validation of an interprofessional learning video peer assessment tool |
title_short | Content validation of an interprofessional learning video peer assessment tool |
title_sort | content validation of an interprofessional learning video peer assessment tool |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5732409/ https://www.ncbi.nlm.nih.gov/pubmed/29246261 http://dx.doi.org/10.1186/s12909-017-1099-5 |
work_keys_str_mv | AT nisbetgillian contentvalidationofaninterprofessionallearningvideopeerassessmenttool AT jormchristine contentvalidationofaninterprofessionallearningvideopeerassessmenttool AT robertschris contentvalidationofaninterprofessionallearningvideopeerassessmenttool AT gordonchristopherj contentvalidationofaninterprofessionallearningvideopeerassessmenttool AT chentimothyf contentvalidationofaninterprofessionallearningvideopeerassessmenttool |