Towards Interpretable Deep Learning Models for Knowledge Tracing
Driven by the fast advancement of deep learning techniques, deep neural networks have recently been adopted to design knowledge tracing (KT) models and achieve better prediction performance. However, the lack of interpretability of these models has severely impeded their practical application, as...
Main Authors: | Lu, Yu; Wang, Deliang; Meng, Qinggang; Chen, Penghe |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7334712/ http://dx.doi.org/10.1007/978-3-030-52240-7_34 |
author | Lu, Yu; Wang, Deliang; Meng, Qinggang; Chen, Penghe
collection | PubMed |
description | Driven by the fast advancement of deep learning techniques, deep neural networks have recently been adopted to design knowledge tracing (KT) models and achieve better prediction performance. However, the lack of interpretability of these models has severely impeded their practical application, as their outputs and working mechanisms are hard to understand due to the opaque decision process and complex inner structures. We thus propose to adopt a post-hoc method to tackle the interpretability issue of deep learning based knowledge tracing (DLKT) models. Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating the relevance from the model's output layer to its input layer. The experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions, and partially validate the computed relevance scores. We believe this can be a solid step towards fully interpreting DLKT models and promoting their practical applications. (An illustrative sketch of the LRP relevance rule follows this record.) |
format | Online Article Text |
id | pubmed-7334712 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
record_format | MEDLINE/PubMed |
spelling | pubmed-7334712 2020-07-06 Towards Interpretable Deep Learning Models for Knowledge Tracing Lu, Yu; Wang, Deliang; Meng, Qinggang; Chen, Penghe Artificial Intelligence in Education Article Driven by the fast advancement of deep learning techniques, deep neural networks have recently been adopted to design knowledge tracing (KT) models and achieve better prediction performance. However, the lack of interpretability of these models has severely impeded their practical application, as their outputs and working mechanisms are hard to understand due to the opaque decision process and complex inner structures. We thus propose to adopt a post-hoc method to tackle the interpretability issue of deep learning based knowledge tracing (DLKT) models. Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating the relevance from the model's output layer to its input layer. The experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions, and partially validate the computed relevance scores. We believe this can be a solid step towards fully interpreting DLKT models and promoting their practical applications. 2020-06-10 /pmc/articles/PMC7334712/ http://dx.doi.org/10.1007/978-3-030-52240-7_34 Text en © Springer Nature Switzerland AG 2020 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
title | Towards Interpretable Deep Learning Models for Knowledge Tracing |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7334712/ http://dx.doi.org/10.1007/978-3-030-52240-7_34 |
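Note: the description above applies layer-wise relevance propagation (LRP) to an RNN-based knowledge tracing model by redistributing the model's output score back onto its inputs. The sketch below only illustrates the generic LRP-epsilon rule on a toy feed-forward network in NumPy; it is not the authors' implementation, and the function name `lrp_epsilon_linear`, the network shapes, and the `eps` value are illustrative assumptions.

```python
# Minimal sketch of the LRP-epsilon rule on a toy feed-forward network (NumPy).
# NOT the paper's implementation: function name, shapes and eps are assumptions.
import numpy as np

def lrp_epsilon_linear(x, W, b, relevance_out, eps=1e-2):
    """Redistribute the output relevance of a linear layer z = W @ x + b onto its inputs."""
    z = W @ x + b                                    # pre-activations of this layer
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser avoids division by ~0
    s = relevance_out / z_stab                       # relevance per unit of pre-activation
    return x * (W.T @ s)                             # input i gets a share proportional to x_i * w_ji

# Toy usage: explain a scalar prediction of a 4-input, one-hidden-layer network.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                               # stands in for an encoded learner interaction
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

h = np.maximum(W1 @ x + b1, 0.0)                     # hidden layer (ReLU)
y = W2 @ h + b2                                      # model output to be explained

R_h = lrp_epsilon_linear(h, W2, b2, y)               # relevance of hidden units (starts from y itself)
R_x = lrp_epsilon_linear(x, W1, b1, R_h)             # relevance score of each input feature
print("input relevance scores:", R_x)
```

In the paper's setting, the same redistribution would have to be carried backwards through the recurrent layers of the DLKT model; LRP variants for LSTMs typically handle the multiplicative gating connections with dedicated rules rather than the plain linear rule shown here.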