No silver bullet: interpretable ML models must be explained
Recent years witnessed a number of proposals for the use of the so-called interpretable models in specific application domains. These include high-risk, but also safety-critical domains. In contrast, other works reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent.
Main Authors: | Marques-Silva, Joao; Ignatiev, Alexey |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10165097/ https://www.ncbi.nlm.nih.gov/pubmed/37168320 http://dx.doi.org/10.3389/frai.2023.1128212 |
_version_ | 1785038196732592128 |
---|---|
author | Marques-Silva, Joao Ignatiev, Alexey |
author_facet | Marques-Silva, Joao Ignatiev, Alexey |
author_sort | Marques-Silva, Joao |
collection | PubMed |
description | Recent years witnessed a number of proposals for the use of the so-called interpretable models in specific application domains. These include high-risk, but also safety-critical domains. In contrast, other works reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations to predictions, this study reveals additional limitations of interpretable models. Concretely, this study considers application domains where the purpose is to help human decision makers to understand why some prediction was made or why was not some other prediction made, and where irreducible (and so minimal) information is sought. In such domains, this study argues that answers to such why (or why not) questions can exhibit arbitrary redundancy, i.e., the answers can be simplified, as long as these answers are obtained by human inspection of the interpretable ML model representation. |
format | Online Article Text |
id | pubmed-10165097 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-101650972023-05-09 No silver bullet: interpretable ML models must be explained Marques-Silva, Joao Ignatiev, Alexey Front Artif Intell Artificial Intelligence Recent years witnessed a number of proposals for the use of the so-called interpretable models in specific application domains. These include high-risk, but also safety-critical domains. In contrast, other works reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations to predictions, this study reveals additional limitations of interpretable models. Concretely, this study considers application domains where the purpose is to help human decision makers to understand why some prediction was made or why was not some other prediction made, and where irreducible (and so minimal) information is sought. In such domains, this study argues that answers to such why (or why not) questions can exhibit arbitrary redundancy, i.e., the answers can be simplified, as long as these answers are obtained by human inspection of the interpretable ML model representation. Frontiers Media S.A. 2023-04-24 /pmc/articles/PMC10165097/ /pubmed/37168320 http://dx.doi.org/10.3389/frai.2023.1128212 Text en Copyright © 2023 Marques-Silva and Ignatiev. https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Artificial Intelligence Marques-Silva, Joao Ignatiev, Alexey No silver bullet: interpretable ML models must be explained |
title | No silver bullet: interpretable ML models must be explained |
title_full | No silver bullet: interpretable ML models must be explained |
title_fullStr | No silver bullet: interpretable ML models must be explained |
title_full_unstemmed | No silver bullet: interpretable ML models must be explained |
title_short | No silver bullet: interpretable ML models must be explained |
title_sort | no silver bullet: interpretable ml models must be explained |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10165097/ https://www.ncbi.nlm.nih.gov/pubmed/37168320 http://dx.doi.org/10.3389/frai.2023.1128212 |
work_keys_str_mv | AT marquessilvajoao nosilverbulletinterpretablemlmodelsmustbeexplained AT ignatievalexey nosilverbulletinterpretablemlmodelsmustbeexplained |
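To make the abstract's central claim concrete, here is a minimal, hypothetical Python sketch (not taken from the article), assuming a decision tree is the interpretable model: the root-to-leaf path used to answer "why was this prediction made?" can mention a feature that is irrelevant to the prediction, so the answer obtained by human inspection of the model representation admits simplification. The tree, instance, and feature names below are illustrative assumptions, not examples from the paper.

```python
from itertools import product

def tree_predict(x1, x2):
    # Decision tree: tests x2 at the root, then x1 in each branch,
    # but the predicted class depends only on x1.
    if x2 == 0:
        return 1 if x1 == 1 else 0
    else:
        return 1 if x1 == 1 else 0

# Instance to explain and its prediction.
instance = {"x1": 1, "x2": 0}
prediction = tree_predict(**instance)  # 1

# "Why" answer read off the tree by inspection: every test on the root-to-leaf path.
path_explanation = {"x2": 0, "x1": 1}

def is_sufficient(fixed):
    """True iff fixing only the features in `fixed` guarantees the same
    prediction for every assignment to the remaining (binary) features."""
    free = [f for f in instance if f not in fixed]
    for values in product([0, 1], repeat=len(free)):
        point = {**fixed, **dict(zip(free, values))}
        if tree_predict(**point) != prediction:
            return False
    return True

print(is_sufficient(path_explanation))  # True: the path entails the prediction
print(is_sufficient({"x1": 1}))         # True: x1=1 alone suffices, so x2=0 is redundant
print(is_sufficient({"x2": 0}))         # False: x2=0 alone does not fix the prediction
```

Under these assumptions, the path answer {x2 = 0, x1 = 1} is correct but redundant: the subset {x1 = 1} already entails the prediction. This is the kind of simplification the abstract argues is missed when answers are obtained solely by human inspection of the interpretable model's representation.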