
Evaluating graph neural networks under graph sampling scenarios

BACKGROUND: It is often the case that only a portion of the underlying network structure is observed in real-world settings. However, as most network analysis methods are built on a complete network structure, the natural questions to ask are: (a) how well these methods perform with incomplete netwo...


Bibliographic Details
Main Authors: Wei, Qiang; Hu, Guangmin
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects: Algorithms and Analysis of Algorithms
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044246/
https://www.ncbi.nlm.nih.gov/pubmed/35494843
http://dx.doi.org/10.7717/peerj-cs.901
_version_ 1784695063982374912
author Wei, Qiang
Hu, Guangmin
author_facet Wei, Qiang
Hu, Guangmin
author_sort Wei, Qiang
collection PubMed
description BACKGROUND: It is often the case that only a portion of the underlying network structure is observed in real-world settings. However, as most network analysis methods are built on a complete network structure, the natural questions to ask are: (a) how well these methods perform with incomplete network structure, (b) which structural observation and network analysis method to choose for a specific task, and (c) is it beneficial to complete the missing structure. METHODS: In this paper, we consider the incomplete network structure as one random sampling instance from a complete graph, and we choose graph neural networks (GNNs), which have achieved promising results on various graph learning tasks, as the representative of network analysis methods. To identify the robustness of GNNs under graph sampling scenarios, we systematically evaluated six state-of-the-art GNNs under four commonly used graph sampling methods. RESULTS: We show that GNNs can still be applied to single static networks under graph sampling scenarios, and simpler GNN models are able to outperform more sophisticated ones in a fair experimental procedure. More importantly, we find that completing the sampled subgraph does improve the performance of downstream tasks in most cases; however, completion is not always effective and needs to be evaluated for a specific dataset. Our code is available at https://github.com/weiqianglg/evaluate-GNNs-under-graph-sampling.
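The description above frames the observed network as one random sampling instance drawn from a complete graph. As a rough illustration only (not taken from the authors' repository, and the record does not name which four sampling methods the paper uses), the Python sketch below produces such a sampled subgraph with simple random node sampling via networkx; a GNN would then be trained and evaluated on the sampled subgraph rather than the full graph.

```python
# Illustrative sketch only (assumed setup, not the authors' code): treat the
# observed network as one random sample drawn from the complete "true" graph.
# Simple random node sampling stands in for the paper's sampling methods here.
import random

import networkx as nx


def random_node_sample(graph: nx.Graph, fraction: float, seed: int = 0) -> nx.Graph:
    """Return the subgraph induced by a uniformly random fraction of the nodes."""
    rng = random.Random(seed)
    k = max(1, int(fraction * graph.number_of_nodes()))
    kept = rng.sample(list(graph.nodes()), k)
    return graph.subgraph(kept).copy()


if __name__ == "__main__":
    full_graph = nx.karate_club_graph()  # stand-in for the complete network
    observed = random_node_sample(full_graph, fraction=0.6)
    print(observed.number_of_nodes(), "nodes,", observed.number_of_edges(), "edges")
    # A GNN would be trained and evaluated on `observed` rather than `full_graph`.
```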
format Online
Article
Text
id pubmed-9044246
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-9044246 2022-04-28 Evaluating graph neural networks under graph sampling scenarios Wei, Qiang Hu, Guangmin PeerJ Comput Sci Algorithms and Analysis of Algorithms BACKGROUND: It is often the case that only a portion of the underlying network structure is observed in real-world settings. However, as most network analysis methods are built on a complete network structure, the natural questions to ask are: (a) how well these methods perform with incomplete network structure, (b) which structural observation and network analysis method to choose for a specific task, and (c) is it beneficial to complete the missing structure. METHODS: In this paper, we consider the incomplete network structure as one random sampling instance from a complete graph, and we choose graph neural networks (GNNs), which have achieved promising results on various graph learning tasks, as the representative of network analysis methods. To identify the robustness of GNNs under graph sampling scenarios, we systematically evaluated six state-of-the-art GNNs under four commonly used graph sampling methods. RESULTS: We show that GNNs can still be applied to single static networks under graph sampling scenarios, and simpler GNN models are able to outperform more sophisticated ones in a fair experimental procedure. More importantly, we find that completing the sampled subgraph does improve the performance of downstream tasks in most cases; however, completion is not always effective and needs to be evaluated for a specific dataset. Our code is available at https://github.com/weiqianglg/evaluate-GNNs-under-graph-sampling. PeerJ Inc. 2022-03-04 /pmc/articles/PMC9044246/ /pubmed/35494843 http://dx.doi.org/10.7717/peerj-cs.901 Text en ©2022 Wei and Hu https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Algorithms and Analysis of Algorithms
Wei, Qiang
Hu, Guangmin
Evaluating graph neural networks under graph sampling scenarios
title Evaluating graph neural networks under graph sampling scenarios
title_full Evaluating graph neural networks under graph sampling scenarios
title_fullStr Evaluating graph neural networks under graph sampling scenarios
title_full_unstemmed Evaluating graph neural networks under graph sampling scenarios
title_short Evaluating graph neural networks under graph sampling scenarios
title_sort evaluating graph neural networks under graph sampling scenarios
topic Algorithms and Analysis of Algorithms
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044246/
https://www.ncbi.nlm.nih.gov/pubmed/35494843
http://dx.doi.org/10.7717/peerj-cs.901
work_keys_str_mv AT weiqiang evaluatinggraphneuralnetworksundergraphsamplingscenarios
AT huguangmin evaluatinggraphneuralnetworksundergraphsamplingscenarios