
TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data

With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time series data has been neglected, with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm on its output and fine-tune it based on the gradients from the classifier and a reconstruction penalty. TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant, i.e., serves as a feature attribution method to boost the interpretability. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with nine other commonly used attribution methods on eight different time series datasets to validate its efficacy. The evaluation results show that TSInsight naturally achieves output space contraction; therefore, it is an effective tool for the interpretability of deep time series models.
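
The abstract outlines a concrete fine-tuning objective: the auto-encoder's output is fed to the classifier so that classification gradients flow back into the auto-encoder, and this term is combined with a reconstruction penalty and a sparsity-inducing norm on the auto-encoder's output. The PyTorch sketch below illustrates one way such a combined loss could look; the architecture (SimpleAutoEncoder), the use of L1 for the sparsity norm and MSE for reconstruction, the loss weights (recon_weight, sparsity_weight), and the choice to freeze the classifier are all illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of a TSInsight-style fine-tuning objective (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAutoEncoder(nn.Module):
    """Small 1D convolutional auto-encoder placed in front of a trained classifier."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def tsinsight_loss(autoencoder, classifier, x, y,
                   recon_weight=1.0, sparsity_weight=1e-3):
    """Classification loss on the reconstructed input (gradients flow from the
    classifier into the auto-encoder), plus a reconstruction penalty and an
    L1 sparsity term on the auto-encoder output."""
    x_hat = autoencoder(x)
    logits = classifier(x_hat)              # classifier is kept fixed here (assumption)
    cls_loss = F.cross_entropy(logits, y)
    recon_loss = F.mse_loss(x_hat, x)
    sparsity = x_hat.abs().mean()
    return cls_loss + recon_weight * recon_loss + sparsity_weight * sparsity

if __name__ == "__main__":
    # Toy fine-tuning step: batch of 8 univariate series of length 128, 4 classes.
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(128, 4))  # stand-in classifier
    for p in classifier.parameters():
        p.requires_grad_(False)              # only the auto-encoder is fine-tuned
    ae = SimpleAutoEncoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    x, y = torch.randn(8, 1, 128), torch.randint(0, 4, (8,))
    opt.zero_grad()
    loss = tsinsight_loss(ae, classifier, x, y)
    loss.backward()
    opt.step()
    # After fine-tuning, ae(x) serves as the attribution: time steps important to
    # the classifier are preserved, irrelevant ones are suppressed toward zero.
```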

Bibliographic Details
Main Authors: Siddiqui, Shoaib Ahmed; Mercier, Dominique; Dengel, Andreas; Ahmed, Sheraz
Format: Online Article Text
Language: English
Journal: Sensors (Basel)
Published: MDPI, 2021-11-05
Subjects: Article
Collection: PubMed (PMC8587116)
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8587116/
https://www.ncbi.nlm.nih.gov/pubmed/34770678
http://dx.doi.org/10.3390/s21217373