
Embedding cognitive framework with self-attention for interpretable knowledge tracing

Recently, deep neural network-based cognitive models such as deep knowledge tracing have been introduced into the field of learning analytics and educational data mining. Despite the accurate predictive performance of such models, it is challenging to interpret their behaviors and obtain an intuitive...


Bibliographic Details
Main Authors: Pu, Yanjun; Wu, Wenjun; Peng, Tianhao; Liu, Fang; Liang, Yu; Yu, Xin; Chen, Ruibo; Feng, Pu
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584970/
https://www.ncbi.nlm.nih.gov/pubmed/36266397
http://dx.doi.org/10.1038/s41598-022-22539-9
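
For readers who want a concrete picture of what the abstract describes — a self-attention (transformer) model over a student's interaction sequence that both predicts future correctness and exposes a per-knowledge-component (KC) mastery state — the following is a minimal, hypothetical sketch. It is not the authors' EAKT implementation; the module names, dimensions, masking scheme, and the KC-level readout head are all assumptions made for illustration.

```python
# Minimal, hypothetical sketch of transformer-based knowledge tracing with a
# per-KC knowledge-state readout, in the spirit of the abstract above.
# NOT the authors' EAKT code: all names, dimensions, and heads are assumptions.
import torch
import torch.nn as nn

class ToyAttentionKT(nn.Module):
    def __init__(self, num_questions: int, num_kcs: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Each interaction is (question id, correctness); encode both in one embedding.
        self.interaction_emb = nn.Embedding(num_questions * 2, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.predict = nn.Linear(d_model, 1)         # probability of a correct next response
        self.kc_state = nn.Linear(d_model, num_kcs)  # one mastery score per knowledge component

    def forward(self, questions: torch.Tensor, correct: torch.Tensor):
        # questions, correct: (batch, seq_len) integer tensors; correct is 0/1.
        x = self.interaction_emb(questions * 2 + correct)
        # Causal mask: each step may only attend to earlier interactions.
        seq_len = questions.size(1)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        h = self.encoder(x, mask=mask)
        p_correct = torch.sigmoid(self.predict(h)).squeeze(-1)  # (batch, seq_len)
        kc_mastery = torch.sigmoid(self.kc_state(h))             # (batch, seq_len, num_kcs)
        return p_correct, kc_mastery

# Usage on random data (shapes only; no real dataset involved):
model = ToyAttentionKT(num_questions=100, num_kcs=10)
q = torch.randint(0, 100, (2, 20))
c = torch.randint(0, 2, (2, 20))
p, state = model(q, c)
print(p.shape, state.shape)  # torch.Size([2, 20]) torch.Size([2, 20, 10])
```
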
_version_ 1784813393523245056
author Pu, Yanjun
Wu, Wenjun
Peng, Tianhao
Liu, Fang
Liang, Yu
Yu, Xin
Chen, Ruibo
Feng, Pu
author_facet Pu, Yanjun
Wu, Wenjun
Peng, Tianhao
Liu, Fang
Liang, Yu
Yu, Xin
Chen, Ruibo
Feng, Pu
author_sort Pu, Yanjun
collection PubMed
description Recently, deep neural network-based cognitive models such as deep knowledge tracing have been introduced into the field of learning analytics and educational data mining. Despite the accurate predictive performance of such models, it is challenging to interpret their behaviors and obtain intuitive insight into latent student learning status. To address these challenges, this paper proposes a new learner modeling framework named EAKT, which embeds a structured cognitive model into a transformer. In this way, EAKT not only achieves excellent prediction of learning outcomes but also depicts students’ knowledge state at a multi-dimensional knowledge component (KC) level. By performing a fine-grained analysis of the student learning process, the proposed framework provides more explanatory learner models for designing and implementing intelligent tutoring systems. The proposed EAKT is verified by experiments. The performance experiments show that EAKT better predicts students’ future learning performance (more than 2.6% higher than the baseline method on two of three real-world datasets). The interpretability experiments demonstrate that the student knowledge state obtained by EAKT is closer to the ground truth than that of other models, meaning that EAKT more accurately traces changes in students’ knowledge state.
format Online
Article
Text
id pubmed-9584970
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9584970 2022-10-22 Embedding cognitive framework with self-attention for interpretable knowledge tracing Pu, Yanjun Wu, Wenjun Peng, Tianhao Liu, Fang Liang, Yu Yu, Xin Chen, Ruibo Feng, Pu Sci Rep Article Recently, deep neural network-based cognitive models such as deep knowledge tracing have been introduced into the field of learning analytics and educational data mining. Despite the accurate predictive performance of such models, it is challenging to interpret their behaviors and obtain intuitive insight into latent student learning status. To address these challenges, this paper proposes a new learner modeling framework named EAKT, which embeds a structured cognitive model into a transformer. In this way, EAKT not only achieves excellent prediction of learning outcomes but also depicts students’ knowledge state at a multi-dimensional knowledge component (KC) level. By performing a fine-grained analysis of the student learning process, the proposed framework provides more explanatory learner models for designing and implementing intelligent tutoring systems. The proposed EAKT is verified by experiments. The performance experiments show that EAKT better predicts students’ future learning performance (more than 2.6% higher than the baseline method on two of three real-world datasets). The interpretability experiments demonstrate that the student knowledge state obtained by EAKT is closer to the ground truth than that of other models, meaning that EAKT more accurately traces changes in students’ knowledge state. Nature Publishing Group UK 2022-10-20 /pmc/articles/PMC9584970/ /pubmed/36266397 http://dx.doi.org/10.1038/s41598-022-22539-9 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Pu, Yanjun
Wu, Wenjun
Peng, Tianhao
Liu, Fang
Liang, Yu
Yu, Xin
Chen, Ruibo
Feng, Pu
Embedding cognitive framework with self-attention for interpretable knowledge tracing
title Embedding cognitive framework with self-attention for interpretable knowledge tracing
title_full Embedding cognitive framework with self-attention for interpretable knowledge tracing
title_fullStr Embedding cognitive framework with self-attention for interpretable knowledge tracing
title_full_unstemmed Embedding cognitive framework with self-attention for interpretable knowledge tracing
title_short Embedding cognitive framework with self-attention for interpretable knowledge tracing
title_sort embedding cognitive framework with self-attention for interpretable knowledge tracing
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584970/
https://www.ncbi.nlm.nih.gov/pubmed/36266397
http://dx.doi.org/10.1038/s41598-022-22539-9
work_keys_str_mv AT puyanjun embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT wuwenjun embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT pengtianhao embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT liufang embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT liangyu embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT yuxin embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT chenruibo embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing
AT fengpu embeddingcognitiveframeworkwithselfattentionforinterpretableknowledgetracing