Massive Open Online Course Evaluation Methods: Systematic Review

Bibliographic Details
Main Authors: Alturkistani, Abrar, Lam, Ching, Foley, Kimberley, Stenfors, Terese, Blum, Elizabeth R, Van Velthoven, Michelle Helena, Meinert, Edward
Format: Online Article Text
Language: English
Published: JMIR Publications 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7215503/
https://www.ncbi.nlm.nih.gov/pubmed/32338618
http://dx.doi.org/10.2196/13851
_version_ 1783532202919723008
author Alturkistani, Abrar
Lam, Ching
Foley, Kimberley
Stenfors, Terese
Blum, Elizabeth R
Van Velthoven, Michelle Helena
Meinert, Edward
author_facet Alturkistani, Abrar
Lam, Ching
Foley, Kimberley
Stenfors, Terese
Blum, Elizabeth R
Van Velthoven, Michelle Helena
Meinert, Edward
author_sort Alturkistani, Abrar
collection PubMed
description BACKGROUND: Massive open online courses (MOOCs) have the potential to make a broader educational impact because many learners undertake these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses. OBJECTIVE: The aim of this review was to identify current MOOC evaluation methods to inform future study designs. METHODS: We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center, (3) IEEE (Institute of Electrical and Electronics Engineers) Xplore, (4) PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the abstracts and titles of the studies. Published studies in the English language that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, data collection, and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health–National Heart, Lung, and Blood Institute quality assessment tool for cohort observational studies and for before-after (pre-post) studies with no control group. RESULTS: The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. In total, 16 studies used a quantitative study design, 11 used a qualitative design, and 6 used a mixed methods study design. In all, 16 studies evaluated learner characteristics and behavior, and 20 studies evaluated learning outcomes and experiences. A total of 12 studies used 1 data source, 11 used 2 data sources, 7 used 3 data sources, 2 used 4 data sources, and 1 used 5 data sources. Overall, 3 studies used more than 3 data sources in their evaluation. In terms of the data analysis methods, quantitative methods were the most prominent, with descriptive and inferential statistics being the 2 preferred methods. In all, 26 studies with a cross-sectional design had a low-quality assessment, whereas RCTs and quasi-experimental studies received a high-quality assessment. CONCLUSIONS: MOOC evaluation data collection and data analysis methods should be chosen carefully on the basis of the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/12087
format Online
Article
Text
id pubmed-7215503
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-72155032020-05-15 Massive Open Online Course Evaluation Methods: Systematic Review Alturkistani, Abrar Lam, Ching Foley, Kimberley Stenfors, Terese Blum, Elizabeth R Van Velthoven, Michelle Helena Meinert, Edward J Med Internet Res Review BACKGROUND: Massive open online courses (MOOCs) have the potential to make a broader educational impact because many learners undertake these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses. OBJECTIVE: The aim of this review was to identify current MOOC evaluation methods to inform future study designs. METHODS: We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center, (3) IEEE (Institute of Electrical and Electronics Engineers) Xplore, (4) PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the abstracts and titles of the studies. Published studies in the English language that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, data collection, and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health–National Heart, Lung, and Blood Institute quality assessment tool for cohort observational studies and for before-after (pre-post) studies with no control group. RESULTS: The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. In total, 16 studies used a quantitative study design, 11 used a qualitative design, and 6 used a mixed methods study design. In all, 16 studies evaluated learner characteristics and behavior, and 20 studies evaluated learning outcomes and experiences. A total of 12 studies used 1 data source, 11 used 2 data sources, 7 used 3 data sources, 2 used 4 data sources, and 1 used 5 data sources. Overall, 3 studies used more than 3 data sources in their evaluation. In terms of the data analysis methods, quantitative methods were the most prominent, with descriptive and inferential statistics being the 2 preferred methods. In all, 26 studies with a cross-sectional design had a low-quality assessment, whereas RCTs and quasi-experimental studies received a high-quality assessment. CONCLUSIONS: MOOC evaluation data collection and data analysis methods should be chosen carefully on the basis of the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/12087 JMIR Publications 2020-04-27 /pmc/articles/PMC7215503/ /pubmed/32338618 http://dx.doi.org/10.2196/13851 Text en ©Abrar Alturkistani, Ching Lam, Kimberley Foley, Terese Stenfors, Elizabeth R Blum, Michelle Helena Van Velthoven, Edward Meinert. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.04.2020.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Review
Alturkistani, Abrar
Lam, Ching
Foley, Kimberley
Stenfors, Terese
Blum, Elizabeth R
Van Velthoven, Michelle Helena
Meinert, Edward
Massive Open Online Course Evaluation Methods: Systematic Review
title Massive Open Online Course Evaluation Methods: Systematic Review
title_full Massive Open Online Course Evaluation Methods: Systematic Review
title_fullStr Massive Open Online Course Evaluation Methods: Systematic Review
title_full_unstemmed Massive Open Online Course Evaluation Methods: Systematic Review
title_short Massive Open Online Course Evaluation Methods: Systematic Review
title_sort massive open online course evaluation methods: systematic review
topic Review
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7215503/
https://www.ncbi.nlm.nih.gov/pubmed/32338618
http://dx.doi.org/10.2196/13851
work_keys_str_mv AT alturkistaniabrar massiveopenonlinecourseevaluationmethodssystematicreview
AT lamching massiveopenonlinecourseevaluationmethodssystematicreview
AT foleykimberley massiveopenonlinecourseevaluationmethodssystematicreview
AT stenforsterese massiveopenonlinecourseevaluationmethodssystematicreview
AT blumelizabethr massiveopenonlinecourseevaluationmethodssystematicreview
AT vanvelthovenmichellehelena massiveopenonlinecourseevaluationmethodssystematicreview
AT meinertedward massiveopenonlinecourseevaluationmethodssystematicreview