A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks

Graph neural networks have been widely used in various fields in recent years. However, the emergence of adversarial attacks makes the reliability of existing neural networks questionable in practical applications. Premeditated attackers can make very small perturbations to the data to fool a neural network into producing wrong results, and these incorrect results can lead to disastrous consequences. How to defend against adversarial attacks has therefore become an urgent research topic. Many researchers have tried to improve model robustness directly or to reduce the negative impact of adversarial attacks through adversarial training. However, the majority of the defense strategies currently in use are inextricably linked to the model-training process, which incurs significant running-time and memory costs. We offer a lightweight and easy-to-implement approach based on graph transformation. Extensive experiments demonstrate that our approach has a defense effect similar to that of existing methods (with accuracy returning to nearly 80%) while using only 10% of their run time when defending against adversarial attacks on GCNs (graph convolutional neural networks).
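
The record reproduces only the abstract, so the paper's concrete graph transformation is not shown here. As a hedged illustration of what a lightweight, preprocessing-style defense of this kind can look like, the sketch below removes edges between nodes whose features are very dissimilar before the graph is passed to a standard GCN; the function names, the Jaccard-similarity criterion, and the threshold are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a graph-transformation (preprocessing) defense for a GCN.
# The catalog record does not describe the authors' actual transformation; the
# similarity filter and threshold below are illustrative assumptions only.
import numpy as np

def jaccard_similarity(x_u, x_v):
    """Jaccard similarity between two binary node-feature vectors."""
    intersection = np.logical_and(x_u, x_v).sum()
    union = np.logical_or(x_u, x_v).sum()
    return intersection / union if union > 0 else 0.0

def prune_adjacency(adj, features, threshold=0.01):
    """Drop edges whose endpoints have very dissimilar features.

    adj       : (N, N) symmetric 0/1 adjacency matrix (numpy array)
    features  : (N, F) binary node-feature matrix
    threshold : edges below this similarity are treated as suspicious and removed
    """
    cleaned = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))  # visit each undirected edge once
    for u, v in zip(rows, cols):
        if jaccard_similarity(features[u], features[v]) < threshold:
            cleaned[u, v] = 0
            cleaned[v, u] = 0
    return cleaned

# Usage: transform the (possibly perturbed) graph once, then feed `cleaned_adj`
# to an ordinary GCN; the training loop itself is unchanged, which is what keeps
# this kind of defense lightweight.
# cleaned_adj = prune_adjacency(adj, features)
```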

Bibliographic Details
Main Authors: Qiao, Zhi; Wu, Zhenqiang; Chen, Jiawang; Ren, Ping’an; Yu, Zhiliang
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9858433/
https://www.ncbi.nlm.nih.gov/pubmed/36673179
http://dx.doi.org/10.3390/e25010039
author Qiao, Zhi
Wu, Zhenqiang
Chen, Jiawang
Ren, Ping’an
Yu, Zhiliang
collection PubMed
description Graph neural networks have been widely used in various fields in recent years. However, the emergence of adversarial attacks makes the reliability of existing neural networks questionable in practical applications. Premeditated attackers can make very small perturbations to the data to fool a neural network into producing wrong results, and these incorrect results can lead to disastrous consequences. How to defend against adversarial attacks has therefore become an urgent research topic. Many researchers have tried to improve model robustness directly or to reduce the negative impact of adversarial attacks through adversarial training. However, the majority of the defense strategies currently in use are inextricably linked to the model-training process, which incurs significant running-time and memory costs. We offer a lightweight and easy-to-implement approach based on graph transformation. Extensive experiments demonstrate that our approach has a defense effect similar to that of existing methods (with accuracy returning to nearly 80%) while using only 10% of their run time when defending against adversarial attacks on GCNs (graph convolutional neural networks).
format Online
Article
Text
id pubmed-9858433
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9858433 2023-01-21 Entropy (Basel) Article MDPI 2022-12-25 /pmc/articles/PMC9858433/ /pubmed/36673179 http://dx.doi.org/10.3390/e25010039 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9858433/
https://www.ncbi.nlm.nih.gov/pubmed/36673179
http://dx.doi.org/10.3390/e25010039