
EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA

Bibliographic Details
Main Authors: Chejara, Pankaj; Prieto, Luis P.; Ruiz-Calleja, Adolfo; Rodríguez-Triana, María Jesús; Shankar, Shashi Kant; Kasepalu, Reet
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8073259/
https://www.ncbi.nlm.nih.gov/pubmed/33921782
http://dx.doi.org/10.3390/s21082863
author Chejara, Pankaj
Prieto, Luis P.
Ruiz-Calleja, Adolfo
Rodríguez-Triana, María Jesús
Shankar, Shashi Kant
Kasepalu, Reet
collection PubMed
description Multimodal Learning Analytics (MMLA) researchers are increasingly employing machine learning (ML) techniques to develop predictive models that improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models performed well under hold-out or cross-validation, but their performance degraded quickly in evaluations across different student groups and learning contexts. The framework thus helps to open up a “wicked problem” in MMLA research that has remained fuzzy (i.e., the generalizability of ML models), which is critical both to accumulating knowledge in the research community and to demonstrating the practical relevance of these techniques.
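
The contrast the abstract draws (strong hold-out/cross-validation scores that degrade when evaluation crosses student groups) can be illustrated with a short sketch. This is not code from the paper: it assumes scikit-learn, and the features, collaboration-quality scores, and group labels below are synthetic placeholders standing in for the audio and log data described above.

    # Illustrative only (not from the paper): contrast standard k-fold
    # cross-validation with across-group evaluation of a regression model,
    # the kind of comparison EFAR-MMLA asks researchers to report.
    # Assumes scikit-learn; X, y, and groups are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))         # stand-in for audio + log features
    y = rng.uniform(0, 4, size=120)       # stand-in for rated collaboration quality
    groups = np.repeat(np.arange(30), 4)  # 30 student groups, 4 samples each

    model = RandomForestRegressor(random_state=0)

    # Standard cross-validation: samples from the same student group can land
    # in both the training and the test fold, which tends to inflate scores.
    kfold_rmse = -cross_val_score(
        model, X, y,
        cv=KFold(n_splits=5, shuffle=True, random_state=0),
        scoring="neg_root_mean_squared_error").mean()

    # Across-group evaluation: each test fold holds out one entire group, so
    # the model is scored only on students it never saw during training.
    logo_rmse = -cross_val_score(
        model, X, y, groups=groups, cv=LeaveOneGroupOut(),
        scoring="neg_root_mean_squared_error").mean()

    print(f"5-fold CV RMSE:           {kfold_rmse:.3f}")
    print(f"leave-one-group-out RMSE: {logo_rmse:.3f}")

On real MMLA data, where samples from the same group are correlated, the across-group RMSE is typically noticeably worse than the k-fold RMSE; the synthetic data here only demonstrates the mechanics of the two protocols.
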
format Online
Article
Text
id pubmed-8073259
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8073259 2021-04-27. Sensors (Basel), Article. MDPI, published online 2021-04-19. /pmc/articles/PMC8073259/ /pubmed/33921782 http://dx.doi.org/10.3390/s21082863. Text en. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA
topic Article