A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods
Main Authors: | Vilone, Giulia; Longo, Luca |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8596373/ https://www.ncbi.nlm.nih.gov/pubmed/34805973 http://dx.doi.org/10.3389/frai.2021.717899 |
---|---|
author | Vilone, Giulia Longo, Luca |
collection | PubMed |
description | Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules. However, models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and might not trust their predictions. Scholars have therefore proposed several methods for extracting rules from data-driven, machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by five model-agnostic, post-hoc rule extractors, employing eight quantitative metrics. The Friedman test was then employed to check whether any method consistently performed better than the others in terms of the selected metrics and could be considered superior. The findings demonstrate that these metrics do not provide sufficient evidence to identify a superior method. However, used together, they form a tool, applicable to every rule-extraction method and machine-learned model, that is suitable for highlighting the strengths and weaknesses of the rule extractors in various applications in an objective and straightforward manner, without any human intervention. They are thus capable of modelling distinct aspects of explainability, providing researchers and practitioners with vital insights into what a model has learned during its training process and how it makes its predictions. |
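The comparison described in the abstract hinges on the Friedman test, a non-parametric test that ranks related samples (here, rule extractors) within each block (here, a dataset or metric) and checks whether any one consistently out-ranks the others. A minimal sketch of such a check follows; the five score lists are illustrative numbers, not the paper's data, and the statistic is computed from scratch (no tie correction) to make the ranking step explicit:

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic for k related samples over n blocks.

    scores: list of k lists, each of length n (methods x datasets).
    Ranks the k methods within each block, then measures how far the
    rank sums deviate from what equal performance would produce.
    No tie correction is applied.
    """
    k, n = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for j in range(n):
        column = [scores[i][j] for i in range(k)]
        for i, v in enumerate(column):
            # Average rank: ties share their rank mass evenly.
            less = sum(1 for w in column if w < v)
            equal = sum(1 for w in column if w == v)
            rank_sums[i] += less + (1 + equal) / 2
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical values of one quantitative metric (e.g. ruleset accuracy)
# for five rule extractors, each evaluated on the same six datasets.
scores = [
    [0.81, 0.77, 0.85, 0.79, 0.83, 0.80],
    [0.78, 0.75, 0.84, 0.80, 0.81, 0.79],
    [0.80, 0.76, 0.83, 0.78, 0.82, 0.81],
    [0.79, 0.74, 0.86, 0.77, 0.80, 0.78],
    [0.82, 0.78, 0.82, 0.81, 0.84, 0.82],
]
stat = friedman_statistic(scores)
# With k=5 methods, df = k-1 = 4; reject equal performance at
# alpha = 0.05 when the statistic exceeds the critical value 9.488.
print(f"Friedman statistic: {stat:.3f}")
```

In practice `scipy.stats.friedmanchisquare` performs the same test (with tie correction) and also returns the p-value; the hand-rolled version above only shows the mechanics behind the "consistently performed better" check.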
format | Online Article Text |
id | pubmed-8596373 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8596373 2021-11-18 A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods Vilone, Giulia Longo, Luca Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2021-11-03 /pmc/articles/PMC8596373/ /pubmed/34805973 http://dx.doi.org/10.3389/frai.2021.717899 Text en Copyright © 2021 Vilone and Longo. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8596373/ https://www.ncbi.nlm.nih.gov/pubmed/34805973 http://dx.doi.org/10.3389/frai.2021.717899 |