Principles and Practice of Explainable Machine Learning
Main Authors: | Belle, Vaishak; Papantonis, Ioannis |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Big Data |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8281957/ https://www.ncbi.nlm.nih.gov/pubmed/34278297 http://dx.doi.org/10.3389/fdata.2021.688969 |
author | Belle, Vaishak; Papantonis, Ioannis
collection | PubMed |
description | Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in diverse areas such as computational biology, law and finance. However, such a highly positive impact is coupled with a significant challenge: how do we understand the decisions suggested by these systems so that we can trust them? In this report, we focus specifically on data-driven methods—machine learning (ML) and pattern recognition models in particular—so as to survey and distill the results and observations from the literature. The purpose of this report can be especially appreciated by noting that ML models are increasingly deployed in a wide range of businesses. However, with the increasing prevalence and complexity of methods, business stakeholders at the very least have a growing number of concerns about the drawbacks of models, data-specific biases, and so on. Analogously, data science practitioners are often not aware of approaches emerging from the academic literature, or may struggle to appreciate the differences between methods, and so end up using industry standards such as SHAP. Here, we have undertaken a survey to help industry practitioners (but also data scientists more broadly) better understand the field of explainable machine learning and apply the right tools. Our later sections build a narrative around a putative data scientist, and discuss how she might go about explaining her models by asking the right questions. From an organizational viewpoint, after motivating the area broadly, we discuss the main developments, including the principles that allow us to study transparent models vs. opaque models, as well as model-specific or model-agnostic post-hoc explainability approaches. We also briefly reflect on deep learning models, and conclude with a discussion about future research directions. |
format | Online Article Text |
id | pubmed-8281957 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8281957 2021-07-16. Principles and Practice of Explainable Machine Learning. Belle, Vaishak; Papantonis, Ioannis. Front Big Data (Big Data). Frontiers Media S.A., 2021-07-01. /pmc/articles/PMC8281957/ /pubmed/34278297 http://dx.doi.org/10.3389/fdata.2021.688969. Copyright © 2021 Belle and Papantonis. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Principles and Practice of Explainable Machine Learning |
topic | Big Data |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8281957/ https://www.ncbi.nlm.nih.gov/pubmed/34278297 http://dx.doi.org/10.3389/fdata.2021.688969 |
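The abstract above contrasts transparent models with opaque ones and notes that practitioners often default to industry standards such as SHAP for post-hoc explanation. As a purely illustrative sketch, not taken from the article, the snippet below shows how such a post-hoc, feature-attribution explanation might be produced with the open-source shap Python package; the dataset, model choice, and plot calls are assumptions made for this example.

```python
# Illustrative only: a post-hoc explanation workflow with the shap library,
# which the abstract cites as a common industry standard. The dataset and
# model below are assumptions for this sketch, not choices from the article.
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Tabular demo data bundled with shap (UCI Adult income); labels cast to 0/1.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y.astype(int), random_state=0)

# An "opaque" model: a gradient-boosted tree ensemble.
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4).fit(X_train, y_train)

# Post-hoc explainer: attributes each prediction (in log-odds) to the input
# features; for a tree model, shap.Explainer dispatches to TreeExplainer.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test.iloc[:100])

# Global summary: mean absolute attribution per feature across the sample.
shap.plots.bar(shap_values)

# Local explanation: feature contributions for one individual prediction.
shap.plots.waterfall(shap_values[0])
```

The same Explanation objects feed other shap plots (for example beeswarm summaries), which is one reason the abstract describes SHAP as a default tool; the survey itself discusses when such model-agnostic or model-specific post-hoc attributions are and are not appropriate.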