Evaluating explainability for graph neural networks
As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic the data in various real-world areas. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods.
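The abstract describes ShapeGGen, a generator that produces synthetic graphs together with ground-truth explanations by planting known motifs. As an illustration of that underlying idea only (this is not GraphXAI's actual API; all function and variable names here are hypothetical), a minimal pure-Python sketch that plants a "house" motif into a random background graph and records the motif nodes as the ground-truth explanation:

```python
import random

def generate_shapeggen_like(num_background=20, seed=0):
    """Hypothetical sketch of a ShapeGGen-style generator: attach a
    5-node 'house' motif to a random background graph and record the
    motif's nodes as the ground-truth explanation for motif labels."""
    rng = random.Random(seed)
    edges = set()
    # Random background graph (Erdos-Renyi-like, edge probability 0.15).
    for u in range(num_background):
        for v in range(u + 1, num_background):
            if rng.random() < 0.15:
                edges.add((u, v))
    # Plant the house motif: a square (0-1-2-3) with a roof node (4).
    house = [num_background + i for i in range(5)]
    motif_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4)]
    for a, b in motif_edges:
        edges.add((house[a], house[b]))
    # Connect the motif to the background with one random edge.
    edges.add((rng.randrange(num_background), house[0]))
    # Node labels: 1 iff the node belongs to the planted motif, so the
    # motif itself is the exact ground-truth explanation for the label.
    ground_truth_nodes = set(house)
    labels = {n: int(n in ground_truth_nodes)
              for n in range(num_background + 5)}
    return sorted(edges), labels, ground_truth_nodes
```

Because the label-generating process is known by construction, the planted subgraph can serve as an unambiguous reference against which an explainer's output is scored.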
Main Authors: | Agarwal, Chirag; Queen, Owen; Lakkaraju, Himabindu; Zitnik, Marinka |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024712/ https://www.ncbi.nlm.nih.gov/pubmed/36934095 http://dx.doi.org/10.1038/s41597-023-01974-x |
_version_ | 1784909167779119104 |
---|---|
author | Agarwal, Chirag Queen, Owen Lakkaraju, Himabindu Zitnik, Marinka |
author_facet | Agarwal, Chirag Queen, Owen Lakkaraju, Himabindu Zitnik, Marinka |
author_sort | Agarwal, Chirag |
collection | PubMed |
description | As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic the data in various real-world areas. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods. |
format | Online Article Text |
id | pubmed-10024712 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-100247122023-03-20 Evaluating explainability for graph neural networks Agarwal, Chirag Queen, Owen Lakkaraju, Himabindu Zitnik, Marinka Sci Data Article As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic the data in various real-world areas. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods. Nature Publishing Group UK 2023-03-18 /pmc/articles/PMC10024712/ /pubmed/36934095 http://dx.doi.org/10.1038/s41597-023-01974-x Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Agarwal, Chirag Queen, Owen Lakkaraju, Himabindu Zitnik, Marinka Evaluating explainability for graph neural networks |
title | Evaluating explainability for graph neural networks |
title_full | Evaluating explainability for graph neural networks |
title_fullStr | Evaluating explainability for graph neural networks |
title_full_unstemmed | Evaluating explainability for graph neural networks |
title_short | Evaluating explainability for graph neural networks |
title_sort | evaluating explainability for graph neural networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024712/ https://www.ncbi.nlm.nih.gov/pubmed/36934095 http://dx.doi.org/10.1038/s41597-023-01974-x |
work_keys_str_mv | AT agarwalchirag evaluatingexplainabilityforgraphneuralnetworks AT queenowen evaluatingexplainabilityforgraphneuralnetworks AT lakkarajuhimabindu evaluatingexplainabilityforgraphneuralnetworks AT zitnikmarinka evaluatingexplainabilityforgraphneuralnetworks |
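The description field mentions that GraphXAI ships evaluation metrics for benchmarking explainers against ground-truth explanations. As a hedged illustration of one common way such a metric can be defined (not the library's own implementation; the function name and the choice of Jaccard overlap are assumptions for this sketch), explanation accuracy over binary node masks:

```python
def explanation_accuracy(predicted_nodes, ground_truth_nodes):
    """Hypothetical sketch of a ground-truth-based explanation metric:
    Jaccard overlap between the set of nodes an explainer selects and
    the ground-truth explanation nodes. GraphXAI's actual metric
    definitions may differ."""
    predicted, truth = set(predicted_nodes), set(ground_truth_nodes)
    if not predicted and not truth:
        return 1.0  # Both empty: vacuously perfect agreement.
    return len(predicted & truth) / len(predicted | truth)
```

A score of 1.0 means the explainer recovered exactly the planted subgraph; values near 0 mean it highlighted unrelated parts of the graph.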