
Auto-GNN: Neural architecture search of graph neural networks

Graph neural networks (GNNs) have been widely used in various graph analysis tasks. Because graph characteristics vary significantly across real-world systems, the architecture parameters must be tuned carefully for each specific scenario to identify a suitable GNN. Neural architecture search (NAS) has shown its potential for discovering effective architectures in image and language modeling tasks. However, existing NAS algorithms cannot be applied efficiently to the GNN search problem for two reasons. First, the large-step exploration in a traditional controller fails to learn the sensitive performance variations caused by slight architecture modifications in GNNs. Second, the search space is composed of heterogeneous GNNs, which prevents the direct adoption of parameter sharing among them to accelerate the search. To tackle these challenges, we propose an automated graph neural network (AGNN) framework that aims to find the optimal GNN architecture efficiently. Specifically, a reinforced conservative controller is designed to explore the architecture space with small steps. To accelerate validation, a novel constrained parameter sharing strategy is presented to regularize weight transfer among GNNs; it avoids training from scratch and saves computation time. Experimental results on benchmark datasets demonstrate that the architecture identified by AGNN achieves the best performance and search efficiency compared with existing human-invented models and traditional search methods.
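The "small-step" exploration idea can be illustrated with a toy sketch. Everything below is an assumption for illustration: the search space components, the greedy hill-climbing driver, and the scoring function are hypothetical stand-ins — the paper's actual controller is a reinforcement-learning policy and its candidates are evaluated by training GNNs, not by a cheap score.

```python
import random

# Hypothetical GNN architecture search space (component names and option
# lists are illustrative, not the paper's exact action space).
SEARCH_SPACE = {
    "aggregator": ["mean", "max", "sum"],
    "activation": ["relu", "tanh", "elu"],
    "hidden_dim": [16, 32, 64, 128],
    "num_heads": [1, 2, 4, 8],
}


def random_architecture(rng):
    """Sample a starting architecture uniformly from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}


def conservative_step(arch, rng):
    """Small-step exploration: resample exactly one architecture component,
    so consecutive candidates differ in a single choice."""
    new_arch = dict(arch)
    key = rng.choice(sorted(SEARCH_SPACE))
    options = [v for v in SEARCH_SPACE[key] if v != arch[key]]
    new_arch[key] = rng.choice(options)
    return new_arch


def search(evaluate, steps=20, seed=0):
    """Greedy hill-climbing driver standing in for the RL controller:
    a mutated candidate is kept only if its (validation) score improves."""
    rng = random.Random(seed)
    best = random_architecture(rng)
    best_score = evaluate(best)
    for _ in range(steps):
        candidate = conservative_step(best, rng)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

The point of the single-component mutation is that the score of the new candidate is directly attributable to one architectural choice, which is what the large-step controllers the abstract criticizes cannot guarantee.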


Bibliographic Details
Main Authors: Zhou, Kaixiong; Huang, Xiao; Song, Qingquan; Chen, Rui; Hu, Xia
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Big Data
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9714572/
https://www.ncbi.nlm.nih.gov/pubmed/36466713
http://dx.doi.org/10.3389/fdata.2022.1029307
Collection: PubMed
Description: Graph neural networks (GNNs) have been widely used in various graph analysis tasks. Because graph characteristics vary significantly across real-world systems, the architecture parameters must be tuned carefully for each specific scenario to identify a suitable GNN. Neural architecture search (NAS) has shown its potential for discovering effective architectures in image and language modeling tasks. However, existing NAS algorithms cannot be applied efficiently to the GNN search problem for two reasons. First, the large-step exploration in a traditional controller fails to learn the sensitive performance variations caused by slight architecture modifications in GNNs. Second, the search space is composed of heterogeneous GNNs, which prevents the direct adoption of parameter sharing among them to accelerate the search. To tackle these challenges, we propose an automated graph neural network (AGNN) framework that aims to find the optimal GNN architecture efficiently. Specifically, a reinforced conservative controller is designed to explore the architecture space with small steps. To accelerate validation, a novel constrained parameter sharing strategy is presented to regularize weight transfer among GNNs; it avoids training from scratch and saves computation time. Experimental results on benchmark datasets demonstrate that the architecture identified by AGNN achieves the best performance and search efficiency compared with existing human-invented models and traditional search methods.
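The constrained parameter sharing mentioned in the abstract can be sketched minimally. The matching rule below (reuse a parent's weights only when a layer's name and shape agree) is an assumption standing in for the paper's actual constraints, and `transfer_shared_weights` is a hypothetical helper, not the paper's API:

```python
def transfer_shared_weights(parent_weights, child_shapes):
    """Constrained weight transfer between heterogeneous architectures.

    parent_weights: {layer_name: (shape, tensor)} from an already-trained
    parent architecture.
    child_shapes: {layer_name: shape} required by the new child architecture.
    Returns (inherited, fresh): weights the child can reuse, and the names
    of layers that must instead be initialized from scratch.
    """
    inherited = {}
    fresh = []
    for name, shape in child_shapes.items():
        parent = parent_weights.get(name)
        if parent is not None and parent[0] == shape:
            inherited[name] = parent[1]  # reuse: skips retraining this layer
        else:
            fresh.append(name)  # incompatible or missing: fresh init needed
    return inherited, fresh
```

The design point this illustrates is the trade-off the abstract describes: unconstrained sharing across heterogeneous GNNs can transfer incompatible weights, while a constraint on what may be transferred keeps the speed-up of avoiding training from scratch without that risk.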
Format: Online Article Text
Record ID: pubmed-9714572
Institution: National Center for Biotechnology Information
Language: English
Publish date: 2022
Publisher: Frontiers Media S.A.
Record format: MEDLINE/PubMed
Journal: Front Big Data
Published online: 2022-11-17
Copyright © 2022 Zhou, Huang, Song, Chen and Hu.
License: https://creativecommons.org/licenses/by/4.0/
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Topic: Big Data
work_keys_str_mv AT zhoukaixiong autognnneuralarchitecturesearchofgraphneuralnetworks
AT huangxiao autognnneuralarchitecturesearchofgraphneuralnetworks
AT songqingquan autognnneuralarchitecturesearchofgraphneuralnetworks
AT chenrui autognnneuralarchitecturesearchofgraphneuralnetworks
AT huxia autognnneuralarchitecturesearchofgraphneuralnetworks