
Explaining machine-learning models for gamma-ray detection and identification

As more complex predictive models are used for gamma-ray spectral analysis, methods are needed to probe and understand their predictions and behavior. Recent work has begun to bring the latest techniques from the field of Explainable Artificial Intelligence (XAI) into the applications of gamma-ray spectroscopy, including the introduction of gradient-based methods like saliency mapping and Gradient-weighted Class Activation Mapping (Grad-CAM), and black box methods like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In addition, new sources of synthetic radiological data are becoming available, and these new data sets present opportunities to train models using more data than ever before. In this work, we use a neural network model trained on synthetic NaI(Tl) urban search data to compare some of these explanation methods and identify modifications that need to be applied to adapt the methods to gamma-ray spectral data. We find that the black box methods LIME and SHAP are especially accurate in their results, and recommend SHAP since it requires little hyperparameter tuning. We also propose and demonstrate a technique for generating counterfactual explanations using orthogonal projections of LIME and SHAP explanations.
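
To make the black-box workflow concrete, the sketch below shows one way KernelSHAP can be applied to a spectral classifier. This is a hypothetical illustration, not the authors' code: the random-forest model, the 128-bin spectra, the injected photopeak, and all parameter values are placeholder assumptions.

    # A minimal, hypothetical sketch of KernelSHAP on a toy spectral classifier.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_bins = 128  # assumed number of spectral energy bins

    # Toy "spectra": Poisson background, plus an injected line for the source class.
    background = rng.poisson(5.0, size=(200, n_bins)).astype(float)
    source = background.copy()
    source[:, 60:64] += rng.poisson(8.0, size=(200, 4))  # fake photopeak
    X = np.vstack([background, source])
    y = np.array([0] * 200 + [1] * 200)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # KernelSHAP treats the model as a black box; a small sample of training
    # spectra serves as the baseline (background) distribution.
    explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
    shap_values = explainer.shap_values(X[-1:], nsamples=500)
    # shap_values holds one per-bin attribution array per class (a list or a
    # stacked array, depending on the shap version); large positive values in
    # the photopeak bins mark the bins driving the "source" prediction.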

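The proposed counterfactual technique can be sketched similarly. The projection below is one reading of the abstract, not necessarily the paper's exact algorithm: it assumes the LIME or SHAP explanation vector e marks the direction in spectrum space driving the prediction, and removes the spectrum's component along that direction. The function name and the interpretation of e are assumptions.

    # A minimal sketch of a projection-based counterfactual; the function name
    # and the meaning assigned to e are assumptions, not the paper's definitions.
    import numpy as np

    def counterfactual_by_projection(x: np.ndarray, e: np.ndarray) -> np.ndarray:
        """Project spectrum x onto the hyperplane orthogonal to explanation e."""
        e_hat = e / np.linalg.norm(e)        # unit vector along the explanation
        x_cf = x - np.dot(x, e_hat) * e_hat  # remove the component along e
        return np.clip(x_cf, 0.0, None)      # physical spectra are non-negative

A natural check on such a counterfactual is to feed the projected spectrum back through the model and verify that the predicted class changes.
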
Bibliographic Details
Main Authors: Bandstra, Mark S., Curtis, Joseph C., Ghawaly, James M., Jones, A. Chandler, Joshi, Tenzing H. Y.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10281578/
https://www.ncbi.nlm.nih.gov/pubmed/37339151
http://dx.doi.org/10.1371/journal.pone.0286829
Collection: PubMed
Record ID: pubmed-10281578
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: PLoS One (Research Article)
Published Online: 2023-06-20
Rights: © 2023 Bandstra et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.