
Rationalization for explainable NLP: a survey

Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question-answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge to interpret. These factors led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, its literature is disorganized. This survey, the first on the topic, analyzes the rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
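
To make concrete why the attribution techniques named above demand specialized knowledge, here is a minimal sketch (not drawn from the survey itself) of LIME explaining a toy text classifier. It assumes the lime and scikit-learn packages; the four-example sentiment dataset is invented for illustration.

    # Minimal sketch: token-level attribution for a text classifier with LIME.
    # Assumes the `lime` and `scikit-learn` packages; the toy data is invented.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great movie, loved it", "terrible plot, boring acting",
             "wonderful performance", "awful and dull"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    # Train a simple tf-idf + logistic regression classifier.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    # LIME perturbs the input text and fits a local linear model to the
    # classifier's probabilities, yielding per-token weights.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    exp = explainer.explain_instance("a wonderful but slightly boring movie",
                                     clf.predict_proba, num_features=4)
    # Prints (token, weight) pairs; reading these signed numeric weights is
    # exactly the output that non-technical users struggle with.
    print(exp.as_list())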


Bibliographic Details
Main Authors: Gurrapu, Sai; Kulkarni, Ajay; Huang, Lifu; Lourentzou, Ismini; Batarseh, Feras A.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10560994/
https://www.ncbi.nlm.nih.gov/pubmed/37818431
http://dx.doi.org/10.3389/frai.2023.1225093
_version_ 1785117828061331456
author Gurrapu, Sai
Kulkarni, Ajay
Huang, Lifu
Lourentzou, Ismini
Batarseh, Feras A.
author_facet Gurrapu, Sai
Kulkarni, Ajay
Huang, Lifu
Lourentzou, Ismini
Batarseh, Feras A.
author_sort Gurrapu, Sai
collection PubMed
description Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question-answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge to interpret. These factors led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, its literature is disorganized. This survey, the first on the topic, analyzes the rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
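
To make the rationalization idea itself concrete, here is a minimal sketch of abstractive rationale generation using the Hugging Face transformers pipeline API. The t5-small checkpoint is a stand-in only: producing useful rationales assumes a model fine-tuned on a rationale dataset such as e-SNLI, which this sketch does not include.

    # Minimal sketch: a seq2seq model emits a label plus a natural language
    # rationale in one generated sequence (abstractive rationalization).
    # t5-small is a placeholder; a rationale-tuned checkpoint is assumed.
    from transformers import pipeline

    generator = pipeline("text2text-generation", model="t5-small")

    prompt = ("explain nli premise: A man is playing a guitar on stage. "
              "hypothesis: A musician is performing.")
    out = generator(prompt, max_new_tokens=48)[0]["generated_text"]
    # A rationale-tuned model would produce output shaped roughly like:
    #   "entailment, because a man playing guitar on stage is a musician performing"
    print(out)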
format Online
Article
Text
id pubmed-10560994
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10560994 2023-10-10
Rationalization for explainable NLP: a survey
Gurrapu, Sai; Kulkarni, Ajay; Huang, Lifu; Lourentzou, Ismini; Batarseh, Feras A.
Front Artif Intell (Artificial Intelligence)
Frontiers Media S.A. 2023-09-25
/pmc/articles/PMC10560994/ /pubmed/37818431 http://dx.doi.org/10.3389/frai.2023.1225093
Text en
Copyright © 2023 Gurrapu, Kulkarni, Huang, Lourentzou and Batarseh. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Gurrapu, Sai
Kulkarni, Ajay
Huang, Lifu
Lourentzou, Ismini
Batarseh, Feras A.
Rationalization for explainable NLP: a survey
title Rationalization for explainable NLP: a survey
title_full Rationalization for explainable NLP: a survey
title_fullStr Rationalization for explainable NLP: a survey
title_full_unstemmed Rationalization for explainable NLP: a survey
title_short Rationalization for explainable NLP: a survey
title_sort rationalization for explainable nlp: a survey
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10560994/
https://www.ncbi.nlm.nih.gov/pubmed/37818431
http://dx.doi.org/10.3389/frai.2023.1225093
work_keys_str_mv AT gurrapusai rationalizationforexplainablenlpasurvey
AT kulkarniajay rationalizationforexplainablenlpasurvey
AT huanglifu rationalizationforexplainablenlpasurvey
AT lourentzouismini rationalizationforexplainablenlpasurvey
AT batarsehferasa rationalizationforexplainablenlpasurvey