
Evaluation of MCQs from MOOCs for common item writing flaws

OBJECTIVE: There is a dearth of research into the quality of assessments based on Multiple Choice Question (MCQ) items in Massive Open Online Courses (MOOCs). This dataset was generated to determine whether MCQ item writing flaws existed in a selection of MOOC assessments, and to evaluate their prevalence if so.


Bibliographic Details
Main Authors: Costello, Eamon, Holland, Jane C., Kirwan, Colette
Format: Online Article Text
Language: English
Published: BioMed Central 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6276217/
https://www.ncbi.nlm.nih.gov/pubmed/30509321
http://dx.doi.org/10.1186/s13104-018-3959-4
_version_ 1783377970893684736
author Costello, Eamon
Holland, Jane C.
Kirwan, Colette
author_facet Costello, Eamon
Holland, Jane C.
Kirwan, Colette
author_sort Costello, Eamon
collection PubMed
description OBJECTIVE: There is a dearth of research into the quality of assessments based on Multiple Choice Question (MCQ) items in Massive Open Online Courses (MOOCs). This dataset was generated to determine whether MCQ item writing flaws existed in a selection of MOOC assessments, and to evaluate their prevalence if so. Hence, researchers reviewed MCQs from a sample of MOOCs, using an evaluation protocol derived from the medical health education literature, which has an extensive evidence-base with regard to writing quality MCQ items. DATA DESCRIPTION: This dataset was collated from MCQ items in 18 MOOCs in the areas of medical health education, life sciences and computer science. Two researchers critically reviewed 204 questions using an evidence-based evaluation protocol. In the data presented, 50% of the MCQs (112) have one or more item writing flaw, while 28% of MCQs (57) contain two or more flaws. Thus, a majority of the MCQs in the dataset violate item-writing guidelines, which mirrors findings of previous research that examined rates of flaws in MCQs in traditional formal educational contexts.
format Online
Article
Text
id pubmed-6276217
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-6276217 2018-12-06 Evaluation of MCQs from MOOCs for common item writing flaws Costello, Eamon Holland, Jane C. Kirwan, Colette BMC Res Notes Data Note OBJECTIVE: There is a dearth of research into the quality of assessments based on Multiple Choice Question (MCQ) items in Massive Open Online Courses (MOOCs). This dataset was generated to determine whether MCQ item writing flaws existed in a selection of MOOC assessments, and to evaluate their prevalence if so. Hence, researchers reviewed MCQs from a sample of MOOCs, using an evaluation protocol derived from the medical health education literature, which has an extensive evidence-base with regard to writing quality MCQ items. DATA DESCRIPTION: This dataset was collated from MCQ items in 18 MOOCs in the areas of medical health education, life sciences and computer science. Two researchers critically reviewed 204 questions using an evidence-based evaluation protocol. In the data presented, 50% of the MCQs (112) have one or more item writing flaw, while 28% of MCQs (57) contain two or more flaws. Thus, a majority of the MCQs in the dataset violate item-writing guidelines, which mirrors findings of previous research that examined rates of flaws in MCQs in traditional formal educational contexts. BioMed Central 2018-12-03 /pmc/articles/PMC6276217/ /pubmed/30509321 http://dx.doi.org/10.1186/s13104-018-3959-4 Text en © The Author(s) 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle Data Note
Costello, Eamon
Holland, Jane C.
Kirwan, Colette
Evaluation of MCQs from MOOCs for common item writing flaws
title Evaluation of MCQs from MOOCs for common item writing flaws
title_full Evaluation of MCQs from MOOCs for common item writing flaws
title_fullStr Evaluation of MCQs from MOOCs for common item writing flaws
title_full_unstemmed Evaluation of MCQs from MOOCs for common item writing flaws
title_short Evaluation of MCQs from MOOCs for common item writing flaws
title_sort evaluation of mcqs from moocs for common item writing flaws
topic Data Note
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6276217/
https://www.ncbi.nlm.nih.gov/pubmed/30509321
http://dx.doi.org/10.1186/s13104-018-3959-4
work_keys_str_mv AT costelloeamon evaluationofmcqsfrommoocsforcommonitemwritingflaws
AT hollandjanec evaluationofmcqsfrommoocsforcommonitemwritingflaws
AT kirwancolette evaluationofmcqsfrommoocsforcommonitemwritingflaws