Derivative-free optimization adversarial attacks for graph convolutional networks

In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model’...

Bibliographic Details
Main Authors: Yang, Runze, Long, Teng
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8409335/
https://www.ncbi.nlm.nih.gov/pubmed/34541312
http://dx.doi.org/10.7717/peerj-cs.693
_version_ 1783746977324859392
author Yang, Runze
Long, Teng
author_facet Yang, Runze
Long, Teng
author_sort Yang, Runze
collection PubMed
description In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model’s classification of the target nodes, or even degrade the model’s overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) to generate graph adversarial examples without using gradients and to apply advanced DFO algorithms conveniently. Second, we implement a direct attack algorithm (DFDA) using the Nevergrad library based on the framework. Additionally, we overcome the problem of a large search space by redesigning the perturbation vector with a size constraint. Finally, we conducted a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases, and it can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node classification adversarial attacks.
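To illustrate the idea behind the described framework, the sketch below shows a minimal derivative-free (1+1)-style search over a size-constrained edge-perturbation vector. It is not the paper's DFDA algorithm and does not use Nevergrad or a real GCN; the `toy_loss` surrogate and the eight-flip budget (matching the abstract's "at most eight edges") are illustrative assumptions only.

```python
import random

def dfo_attack(loss, n_edges, max_flips=8, budget=200, seed=0):
    """Derivative-free (1+1) search over sets of flipped edge indices.

    `loss` maps a tuple of flipped-edge indices to a scalar; higher
    means the target node is closer to being misclassified. No
    gradients are used: candidates are proposed by random mutation
    and kept only if they do not worsen the score.
    """
    rng = random.Random(seed)
    best = tuple(sorted(rng.sample(range(n_edges), max_flips)))
    best_score = loss(best)
    for _ in range(budget):
        # Mutate one flipped edge; dedup keeps the size constraint.
        cand = list(best)
        cand[rng.randrange(len(cand))] = rng.randrange(n_edges)
        cand = tuple(sorted(set(cand)))
        score = loss(cand)
        if score >= best_score:
            best, best_score = cand, score
    return best, best_score

# Hypothetical surrogate loss: pretend edges near index 10 hurt the
# target node's correct-class margin the most when flipped.
toy_loss = lambda flips: -sum(abs(i - 10) for i in flips)

perturbation, score = dfo_attack(toy_loss, n_edges=100)
```

In the actual framework a DFO library such as Nevergrad would supply the search strategy, and `loss` would query the black-box GCN's output for the target node instead of a closed-form surrogate.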
format Online
Article
Text
id pubmed-8409335
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-84093352021-09-17 Derivative-free optimization adversarial attacks for graph convolutional networks Yang, Runze Long, Teng PeerJ Comput Sci Artificial Intelligence In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model’s classification of the target nodes, or even degrade the model’s overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) to generate graph adversarial examples without using gradients and to apply advanced DFO algorithms conveniently. Second, we implement a direct attack algorithm (DFDA) using the Nevergrad library based on the framework. Additionally, we overcome the problem of a large search space by redesigning the perturbation vector with a size constraint. Finally, we conducted a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases, and it can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node classification adversarial attacks. PeerJ Inc. 2021-08-24 /pmc/articles/PMC8409335/ /pubmed/34541312 http://dx.doi.org/10.7717/peerj-cs.693 Text en © 2021 Yang and Long https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed.
For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Yang, Runze
Long, Teng
Derivative-free optimization adversarial attacks for graph convolutional networks
title Derivative-free optimization adversarial attacks for graph convolutional networks
title_full Derivative-free optimization adversarial attacks for graph convolutional networks
title_fullStr Derivative-free optimization adversarial attacks for graph convolutional networks
title_full_unstemmed Derivative-free optimization adversarial attacks for graph convolutional networks
title_short Derivative-free optimization adversarial attacks for graph convolutional networks
title_sort derivative-free optimization adversarial attacks for graph convolutional networks
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8409335/
https://www.ncbi.nlm.nih.gov/pubmed/34541312
http://dx.doi.org/10.7717/peerj-cs.693
work_keys_str_mv AT yangrunze derivativefreeoptimizationadversarialattacksforgraphconvolutionalnetworks
AT longteng derivativefreeoptimizationadversarialattacksforgraphconvolutionalnetworks