What is Interpretability?
We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on “explainability,” “understandability” and “interpretability.” To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of “interpretability” is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.

Main Authors: Erasmus, Adrian; Brunet, Tyler D. P.; Fisher, Eyal
Format: Online Article Text
Language: English
Journal: Philosophy & Technology (Philos Technol)
Published: Springer Netherlands, 2020 (published online 2020-11-12; issue year 2021)
Subjects: Research Article
License: © The Author(s) 2020. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8654716/
https://www.ncbi.nlm.nih.gov/pubmed/34966640
http://dx.doi.org/10.1007/s13347-020-00435-2