
LPF-Defense: 3D adversarial defense based on frequency analysis

Bibliographic Details
Main Authors: Naderi, Hanieh; Noorbakhsh, Kimia; Etemadi, Arian; Kasaei, Shohreh
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9901796/
https://www.ncbi.nlm.nih.gov/pubmed/36745627
http://dx.doi.org/10.1371/journal.pone.0271388
_version_ 1784883098270302208
author Naderi, Hanieh
Noorbakhsh, Kimia
Etemadi, Arian
Kasaei, Shohreh
collection PubMed
description 3D point clouds are increasingly being used in various applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled into misclassification by 3D adversarial attacks intentionally designed to perturb some of a point cloud’s features. These misclassifications may be due to the network’s overreliance on features that carry unnecessary information in the training sets. As such, identifying the features used by deep classifiers and removing features with unnecessary information from the training data can improve the network’s robustness against adversarial attacks. In this paper, the LPF-Defense framework is proposed to discard this unnecessary information from the training data by suppressing the high-frequency content in the training phase. Our analysis shows that adversarial perturbations are found in the high-frequency content of adversarial point clouds. Experiments show that the proposed defense method achieves state-of-the-art defense performance against six adversarial attacks on the PointNet, PointNet++, and DGCNN models. The findings are supported by an extensive evaluation on synthetic (ModelNet40 and ShapeNet) and real (ScanObjectNN) datasets. In particular, classification accuracy improves by an average of 3.8% against the Drop100 attack and 4.26% against the Drop200 attack compared to state-of-the-art methods. The method also improves the models’ accuracy on the original datasets compared to other available methods. (To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.)
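The description above states that the defense suppresses the high-frequency content of the training point clouds, but it does not spell out which frequency transform is used. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation (which is available at the GitHub link above): it low-pass filters a point cloud by projecting its coordinates onto the lowest-frequency eigenvectors of a k-NN graph Laplacian. All function names and parameter values are assumptions made for this sketch.

# Illustrative low-pass filtering of a 3D point cloud via a graph spectral transform.
# NOTE: not the paper's exact pipeline; it only demonstrates the idea of suppressing
# high-frequency content. The k-NN graph Laplacian and all parameters are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_laplacian(points: np.ndarray, k: int = 10) -> np.ndarray:
    """Unnormalized graph Laplacian of a symmetric k-NN graph over the points."""
    n = points.shape[0]
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)   # first neighbor is the point itself
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, idx[i, 1:]] = 1.0
    adj = np.maximum(adj, adj.T)           # symmetrize the adjacency matrix
    degree = np.diag(adj.sum(axis=1))
    return degree - adj

def low_pass_filter(points: np.ndarray, keep: int = 30, k: int = 10) -> np.ndarray:
    """Project the xyz coordinates onto the `keep` lowest graph-frequency modes."""
    laplacian = knn_graph_laplacian(points, k=k)
    # Laplacian eigenvectors ordered by eigenvalue act as graph "frequencies".
    _, eigvecs = np.linalg.eigh(laplacian)
    basis = eigvecs[:, :keep]              # low-frequency basis
    coeffs = basis.T @ points              # graph Fourier coefficients of x, y, z
    return basis @ coeffs                  # reconstruction without high frequencies

if __name__ == "__main__":
    cloud = np.random.rand(1024, 3)        # stand-in for a ModelNet40-style shape
    smoothed = low_pass_filter(cloud, keep=30)
    print(cloud.shape, smoothed.shape)

In the setting the description outlines, such filtered clouds would presumably replace or augment the originals when training the point-cloud classifiers, with the number of retained low-frequency components controlling how aggressively high-frequency detail is discarded.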
format Online
Article
Text
id pubmed-9901796
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-9901796 2023-02-07 LPF-Defense: 3D adversarial defense based on frequency analysis Naderi, Hanieh; Noorbakhsh, Kimia; Etemadi, Arian; Kasaei, Shohreh PLoS One Research Article Public Library of Science 2023-02-06 /pmc/articles/PMC9901796/ /pubmed/36745627 http://dx.doi.org/10.1371/journal.pone.0271388 Text en © 2023 Naderi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title LPF-Defense: 3D adversarial defense based on frequency analysis
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9901796/
https://www.ncbi.nlm.nih.gov/pubmed/36745627
http://dx.doi.org/10.1371/journal.pone.0271388