Evaluating Crowdsourcing and Topic Modeling in Generating Knowledge Components from Explanations

Associating assessment items with hypothesized knowledge components (KCs) enables us to gain fine-grained data on students’ performance within an ed-tech system. However, creating this association is a time consuming process and requires substantial instructor effort. In this study, we present the results of crowdsourcing valuable insights into the underlying concepts of problems in mathematics and English writing, as a first step in leveraging the crowd to expedite the task of generating KCs. We presented crowdworkers with two problems in each domain and asked them to provide three explanations about why one problem is more challenging than the other. These explanations were then independently analyzed through (1) a series of qualitative coding methods and (2) several topic modeling techniques, to compare how they might assist in extracting KCs and other insights from the participant contributions. Results of our qualitative coding showed that crowdworkers were able to generate KCs that approximately matched those generated by domain experts. At the same time, the topic models’ outputs were evaluated against both the domain expert generated KCs and the results of the previous coding to determine effectiveness. Ultimately we found that while the topic modeling was not up to parity with the qualitative coding methods, it did assist in identifying useful clusters of explanations. This work demonstrates a method to leverage both the crowd’s knowledge and topic modeling to assist in the process of generating KCs for assessment items.

Bibliographic Details
Main Authors: Moore, Steven; Nguyen, Huy A.; Stamper, John
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7334146/
http://dx.doi.org/10.1007/978-3-030-52237-7_32
Published in: Artificial Intelligence in Education. 2020-06-09.
© Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.