Explainable AI via learning to optimize
Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the “learn to optimize” (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy.
Main Authors: Heaton, Howard; Fung, Samy Wu
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10284861/ https://www.ncbi.nlm.nih.gov/pubmed/37344533 http://dx.doi.org/10.1038/s41598-023-36249-3
_version_ | 1785061484972212224 |
author | Heaton, Howard; Fung, Samy Wu |
author_facet | Heaton, Howard; Fung, Samy Wu |
author_sort | Heaton, Howard |
collection | PubMed |
description | Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the “learn to optimize” (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy. |
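The abstract's central idea, that each model inference is produced by solving a data-driven optimization problem whose constraints encode prior knowledge, and that a simple certificate can flag untrustworthy inferences, can be sketched in a few lines. This is an illustrative toy (projected gradient descent on a least-squares problem with a nonnegativity constraint, and the residual norm as a trust certificate), not the paper's actual implementation; all names and parameters here are hypothetical.

```python
import numpy as np

def l2o_inference(A, b, alpha=0.1, steps=500):
    """Toy L2O-style inference: the output is the solution of a constrained
    optimization problem, min ||Ax - b||^2 subject to x >= 0. The projection
    step enforces the prior-knowledge constraint by construction. The step
    size `alpha` stands in for a learned, data-driven parameter."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)          # gradient of the least-squares loss
        x = np.maximum(x - alpha * grad, 0.0)  # project onto the constraint set
    return x

def certificate(A, b, x):
    """A simple interpretable certificate: the residual norm. A large value
    signals that the inference should not be trusted for this input."""
    return np.linalg.norm(A @ x - b)

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 4.0])
x = l2o_inference(A, b)
print(x, certificate(A, b, x))  # x close to [1, 2], residual near zero
```

The point of the sketch is that the constraint is satisfied by construction (a theoretical guarantee of the optimization formulation, not a post hoc check), and the certificate is a single human-readable number rather than an opaque confidence score.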
format | Online Article Text |
id | pubmed-10284861 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10284861 2023-06-23 Explainable AI via learning to optimize; Heaton, Howard; Fung, Samy Wu; Sci Rep; Article; Nature Publishing Group UK 2023-06-21; /pmc/articles/PMC10284861/ /pubmed/37344533 http://dx.doi.org/10.1038/s41598-023-36249-3; Text; en; © The Author(s) 2023. Open Access: licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Heaton, Howard; Fung, Samy Wu; Explainable AI via learning to optimize |
title | Explainable AI via learning to optimize |
title_full | Explainable AI via learning to optimize |
title_fullStr | Explainable AI via learning to optimize |
title_full_unstemmed | Explainable AI via learning to optimize |
title_short | Explainable AI via learning to optimize |
title_sort | explainable ai via learning to optimize |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10284861/ https://www.ncbi.nlm.nih.gov/pubmed/37344533 http://dx.doi.org/10.1038/s41598-023-36249-3 |
work_keys_str_mv | AT heatonhoward explainableaivialearningtooptimize AT fungsamywu explainableaivialearningtooptimize |