
"What is relevant in a text document?": An interpretable machine learning approach

Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to map documents to these abstract concepts automatically, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting a text's category accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision, which makes it possible to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
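
The abstract's core idea, decomposing a classifier's output score onto the input words, can be sketched in a few lines. The following Python snippet is a minimal, hypothetical illustration of the LRP epsilon rule on a toy two-layer bag-of-words network: the vocabulary, weights, and layer sizes are invented for illustration and do not reproduce the CNN or SVM setup described in the paper.

# Minimal sketch of layer-wise relevance propagation (LRP) with the
# epsilon rule on a toy two-layer bag-of-words classifier. Vocabulary,
# weights, and shapes below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["match", "team", "goal", "stock", "market"]  # hypothetical vocabulary
x = np.array([2.0, 1.0, 1.0, 0.0, 0.0])               # bag-of-words counts for one document

# Hypothetical weights of a one-hidden-layer network with ReLU units.
# Biases are set to zero so that relevance is conserved exactly.
W1, b1 = rng.normal(size=(5, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)         # 3 topic classes

# Forward pass, keeping the activations that LRP needs.
h = np.maximum(0.0, x @ W1 + b1)
scores = h @ W2 + b2
target = int(np.argmax(scores))

def lrp_epsilon(a, W, R_out, eps=1e-6):
    # Redistribute the relevance R_out of a layer's outputs onto its
    # inputs a, in proportion to each contribution z_ij = a_i * W_ij.
    z = a[:, None] * W
    zsum = z.sum(axis=0)
    denom = zsum + eps * np.where(zsum >= 0, 1.0, -1.0)  # epsilon stabilizer
    return (z / denom * R_out).sum(axis=1)

# Start from the predicted class's score and propagate back to the words.
R_top = np.zeros_like(scores)
R_top[target] = scores[target]
R_hidden = lrp_epsilon(h, W2, R_top)
R_words = lrp_epsilon(x, W1, R_hidden)

for word, r in sorted(zip(vocab, R_words), key=lambda t: -t[1]):
    print(f"{word:>8s}  relevance {r:+.3f}")

The word-level relevance scores printed at the end play the role described in the abstract: words with large positive relevance are the ones driving the classification decision, and stacking or pooling such scores per document yields the relevance-based document vectors the paper builds on.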

Bibliographic Details
Main authors: Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert; Samek, Wojciech
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2017-08-11
Subjects: Research Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5553725/
https://www.ncbi.nlm.nih.gov/pubmed/28800619
http://dx.doi.org/10.1371/journal.pone.0181142
Collection: PubMed (record id pubmed-5553725; record format MEDLINE/PubMed)
Institution: National Center for Biotechnology Information
Journal: PLoS One
License: © 2017 Arras et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.