
TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation

Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where knowledge is alternately transferred between tree models and DNNs. Specifically, the proposed TNT learning framework leverages the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNNs output high-performing predictions from the knowledge-embedding inputs and act as a teacher model for the subsequent tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of DNNs. Extensive experiments on various machine learning tasks demonstrate the effectiveness of the proposed method.
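As a concrete illustration of the three-stage pipeline described in the abstract, the following minimal Python sketch uses off-the-shelf scikit-learn components as stand-ins: an ordinary DecisionTreeClassifier in place of the proposed JSDT, an MLPClassifier as the DNN teacher, and a GradientBoostingRegressor fitted on the teacher's soft labels in place of the dGBDT. All model choices, features, and settings here are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a TNT-style tree -> network -> tree distillation
# pipeline. Stand-ins only: a plain decision tree replaces the JSDT,
# and an ordinary GBDT regressor replaces the dGBDT.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: the first tree model provides a knowledge representation;
# here its leaf indices are appended to the raw features.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
X_tr_aug = np.hstack([X_tr, tree.apply(X_tr).reshape(-1, 1).astype(float)])
X_te_aug = np.hstack([X_te, tree.apply(X_te).reshape(-1, 1).astype(float)])

# Stage 2: the DNN teacher is trained on the augmented inputs and
# produces soft labels (class probabilities) for distillation.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
dnn.fit(X_tr_aug, y_tr)
soft_labels = dnn.predict_proba(X_tr_aug)[:, 1]

# Stage 3: the student GBDT regresses the teacher's soft labels,
# yielding an interpretable tree-based model with comparable accuracy.
student = GradientBoostingRegressor(random_state=0).fit(X_tr_aug, soft_labels)
student_pred = (student.predict(X_te_aug) >= 0.5).astype(int)
print("teacher acc:", dnn.score(X_te_aug, y_te))
print("student acc:", (student_pred == y_te).mean())

Fitting the student on soft labels rather than hard labels is the distillation step: the student sees the teacher's confidence on each example, not just its final decision.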


Bibliographic Details
Main Authors: Li, Jiawei, Li, Yiming, Xiang, Xingchun, Xia, Shu-Tao, Dong, Siyi, Cai, Yun
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712003/
https://www.ncbi.nlm.nih.gov/pubmed/33286971
http://dx.doi.org/10.3390/e22111203
_version_ 1783618272449527808
author Li, Jiawei
Li, Yiming
Xiang, Xingchun
Xia, Shu-Tao
Dong, Siyi
Cai, Yun
author_facet Li, Jiawei
Li, Yiming
Xiang, Xingchun
Xia, Shu-Tao
Dong, Siyi
Cai, Yun
author_sort Li, Jiawei
collection PubMed
description Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where knowledge is alternately transferred between tree models and DNNs. Specifically, the proposed TNT learning framework leverages the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNNs output high-performing predictions from the knowledge-embedding inputs and act as a teacher model for the subsequent tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of DNNs. Extensive experiments on various machine learning tasks demonstrate the effectiveness of the proposed method.
format Online
Article
Text
id pubmed-7712003
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7712003 2021-02-24 TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation Li, Jiawei Li, Yiming Xiang, Xingchun Xia, Shu-Tao Dong, Siyi Cai, Yun Entropy (Basel) Article Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where knowledge is alternately transferred between tree models and DNNs. Specifically, the proposed TNT learning framework leverages the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNNs output high-performing predictions from the knowledge-embedding inputs and act as a teacher model for the subsequent tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of DNNs. Extensive experiments on various machine learning tasks demonstrate the effectiveness of the proposed method. MDPI 2020-10-24 /pmc/articles/PMC7712003/ /pubmed/33286971 http://dx.doi.org/10.3390/e22111203 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Li, Jiawei
Li, Yiming
Xiang, Xingchun
Xia, Shu-Tao
Dong, Siyi
Cai, Yun
TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title_full TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title_fullStr TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title_full_unstemmed TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title_short TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation
title_sort tnt: an interpretable tree-network-tree learning framework using knowledge distillation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712003/
https://www.ncbi.nlm.nih.gov/pubmed/33286971
http://dx.doi.org/10.3390/e22111203
work_keys_str_mv AT lijiawei tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation
AT liyiming tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation
AT xiangxingchun tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation
AT xiashutao tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation
AT dongsiyi tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation
AT caiyun tntaninterpretabletreenetworktreelearningframeworkusingknowledgedistillation