Deep knowledge tracing with learning curves
Knowledge tracing (KT) models students' mastery level of knowledge concepts based on their responses to the questions in the past and predicts the probability that they correctly answer subsequent questions in the future. Recent KT models are mostly developed with deep neural networks and have...
| Main Authors: | Su, Hang; Liu, Xin; Yang, Shanghui; Lu, Xuesong |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2023 |
| Subjects: | Psychology |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10097988/ https://www.ncbi.nlm.nih.gov/pubmed/37063528 http://dx.doi.org/10.3389/fpsyg.2023.1150329 |
_version_ | 1785024692462026752 |
author | Su, Hang Liu, Xin Yang, Shanghui Lu, Xuesong |
author_facet | Su, Hang Liu, Xin Yang, Shanghui Lu, Xuesong |
author_sort | Su, Hang |
collection | PubMed |
description | Knowledge tracing (KT) models students' mastery level of knowledge concepts based on their responses to the questions in the past and predicts the probability that they correctly answer subsequent questions in the future. Recent KT models are mostly developed with deep neural networks and have demonstrated superior performance over traditional approaches. However, they ignore the explicit modeling of the learning curve theory, which generally says that more practices on the same knowledge concept enhance one's mastery level of the concept. Based on this theory, we propose a Convolution-Augmented Knowledge Tracing (CAKT) model and a Capsule-Enhanced CAKT (CECAKT) model to enable learning curve modeling. In particular, when predicting a student's response to the next question associated with a specific knowledge concept, CAKT uses a module built with three-dimensional convolutional neural networks to learn the student's recent experience on that concept, and CECAKT improves CAKT by replacing the global average pooling layer with capsule networks to prevent information loss. Moreover, the two models employ LSTM networks to learn the overall knowledge state, which is fused with the feature learned by the convolutional/capsule module. As such, the two models can learn the student's overall knowledge state as well as the knowledge state of the concept in the next question. Experimental results on four real-life datasets show that CAKT and CECAKT both achieve better performance compared to existing deep KT models. |
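The abstract builds on the learning curve theory: repeated practice on the same knowledge concept raises one's mastery of it. A common formalization of this idea (not the paper's CAKT/CECAKT model; the function name, parameters, and constants below are illustrative assumptions) is a power-law decay of error rate with practice count, so the predicted probability of a correct answer rises with each practice attempt:

```python
# Illustrative sketch of the learning curve theory: error rate decays as a
# power of the practice count, so predicted success probability increases
# with repeated practice on the same knowledge concept.
# (All names and constants here are assumptions for illustration only.)

def predicted_success(n_practices: int, initial_error: float = 0.5,
                      decay: float = 0.4) -> float:
    """Probability of answering correctly after n_practices prior attempts
    on the same concept, under a power-law error decay."""
    if n_practices < 0:
        raise ValueError("practice count must be non-negative")
    error = initial_error * (n_practices + 1) ** (-decay)
    return 1.0 - error

# More practice on a concept yields a strictly higher predicted probability.
probs = [predicted_success(n) for n in range(5)]
```

Deep KT models such as those described above learn this effect implicitly from data rather than fixing a closed-form curve; the paper's contribution is to model the recent practice history on the target concept explicitly.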
format | Online Article Text |
id | pubmed-10097988 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-100979882023-04-14 Deep knowledge tracing with learning curves Su, Hang Liu, Xin Yang, Shanghui Lu, Xuesong Front Psychol Psychology Knowledge tracing (KT) models students' mastery level of knowledge concepts based on their responses to the questions in the past and predicts the probability that they correctly answer subsequent questions in the future. Recent KT models are mostly developed with deep neural networks and have demonstrated superior performance over traditional approaches. However, they ignore the explicit modeling of the learning curve theory, which generally says that more practices on the same knowledge concept enhance one's mastery level of the concept. Based on this theory, we propose a Convolution-Augmented Knowledge Tracing (CAKT) model and a Capsule-Enhanced CAKT (CECAKT) model to enable learning curve modeling. In particular, when predicting a student's response to the next question associated with a specific knowledge concept, CAKT uses a module built with three-dimensional convolutional neural networks to learn the student's recent experience on that concept, and CECAKT improves CAKT by replacing the global average pooling layer with capsule networks to prevent information loss. Moreover, the two models employ LSTM networks to learn the overall knowledge state, which is fused with the feature learned by the convolutional/capsule module. As such, the two models can learn the student's overall knowledge state as well as the knowledge state of the concept in the next question. Experimental results on four real-life datasets show that CAKT and CECAKT both achieve better performance compared to existing deep KT models. Frontiers Media S.A. 2023-03-30 /pmc/articles/PMC10097988/ /pubmed/37063528 http://dx.doi.org/10.3389/fpsyg.2023.1150329 Text en Copyright © 2023 Su, Liu, Yang and Lu. 
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Su, Hang Liu, Xin Yang, Shanghui Lu, Xuesong Deep knowledge tracing with learning curves |
title | Deep knowledge tracing with learning curves |
title_full | Deep knowledge tracing with learning curves |
title_fullStr | Deep knowledge tracing with learning curves |
title_full_unstemmed | Deep knowledge tracing with learning curves |
title_short | Deep knowledge tracing with learning curves |
title_sort | deep knowledge tracing with learning curves |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10097988/ https://www.ncbi.nlm.nih.gov/pubmed/37063528 http://dx.doi.org/10.3389/fpsyg.2023.1150329 |
work_keys_str_mv | AT suhang deepknowledgetracingwithlearningcurves AT liuxin deepknowledgetracingwithlearningcurves AT yangshanghui deepknowledgetracingwithlearningcurves AT luxuesong deepknowledgetracingwithlearningcurves |