
Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches †


Bibliographic Details
Main Authors: Kim, Yongsu; Kang, Hyoeun; Suryanto, Naufal; Larasati, Harashta Tatimma; Mukaroh, Afifatul; Kim, Howon
Format: Online, Article, Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8398873/
https://www.ncbi.nlm.nih.gov/pubmed/34450763
http://dx.doi.org/10.3390/s21165323
collection PubMed
description Deep neural networks (DNNs), especially those used in computer vision, are highly vulnerable to adversarial attacks, such as adversarial perturbations and adversarial patches. Adversarial patches, often considered more appropriate for a real-world attack, are attached to the target object or its surroundings to deceive the target system. However, most previous research employed adversarial patches that are conspicuous to human vision, making them easy to identify and counter. Previously, the spatially localized perturbation GAN (SLP-GAN) was proposed, in which the perturbation was added only to the most representative area of the input images, creating a spatially localized adversarial camouflage patch that excels in terms of visual fidelity and is, therefore, difficult to detect by human vision. In this study, the method was extended, as eSLP-GAN, to deceive both classifiers and object-detection systems. Specifically, the loss function was modified for greater compatibility with an object-detection model attack and to increase robustness in the real world. Furthermore, the applicability of the proposed method was tested on the CARLA simulator for a more authentic real-world attack scenario.
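The abstract's core idea is that the adversarial perturbation is restricted to the most representative region of the image rather than covering the whole input. A minimal NumPy sketch of that masking step is shown below; it is illustrative only, not the authors' implementation. The function name, the hard-coded binary mask (which in SLP-GAN would come from an attention method over the target model), and the `eps` magnitude bound are all assumptions.

```python
import numpy as np

def apply_localized_patch(image, perturbation, mask, eps=0.1):
    """Add a spatially localized perturbation to an image.

    The binary mask confines the perturbation to the chosen region,
    leaving the rest of the image untouched; clipping to +/- eps keeps
    the patch visually inconspicuous, and the final clip keeps pixel
    values in the valid [0, 1] range.
    """
    patch = np.clip(perturbation, -eps, eps) * mask
    return np.clip(image + patch, 0.0, 1.0)

# Toy example: an 8x8 grayscale image with a mask over a central
# 3x3 region standing in for the "most representative area".
rng = np.random.default_rng(0)
image = np.full((8, 8), 0.5)
perturbation = rng.uniform(-1.0, 1.0, size=(8, 8))
mask = np.zeros((8, 8))
mask[3:6, 3:6] = 1.0

adv = apply_localized_patch(image, perturbation, mask, eps=0.1)
```

Outside the masked region the adversarial image is pixel-for-pixel identical to the original, which is what makes the patch "spatially localized"; in the actual method a generator network would produce the perturbation and the mask would be learned or derived from the target model's attention.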
id pubmed-8398873
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling Sensors (Basel). MDPI, published online 2021-08-06. /pmc/articles/PMC8398873/ /pubmed/34450763 http://dx.doi.org/10.3390/s21165323 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Article