Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attacks in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
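The core mechanism in the abstract is Monte-Carlo Dropout Sampling: dropout is left active at inference time and repeated stochastic forward passes are used to quantify epistemic uncertainty. As a rough illustration of that idea only (the function names, the pass count `T = 50`, and the variance-based score are assumptions, not the authors' published code), a minimal PyTorch sketch:

```python
# Hedged sketch of epistemic-uncertainty estimation via Monte-Carlo
# dropout (PyTorch). Function names, T = 50, and the variance-based
# score are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


def enable_dropout(model: nn.Module) -> None:
    """Put only the nn.Dropout layers back in train mode so they stay
    stochastic while everything else (e.g. batch norm) stays frozen."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, T: int = 50) -> torch.Tensor:
    """One epistemic-uncertainty score per input in the batch: the
    variance of the softmax outputs across T stochastic forward
    passes, averaged over classes. Assumes the model returns logits."""
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)]
        )  # shape: (T, batch, classes)
    return probs.var(dim=0).mean(dim=-1)  # shape: (batch,)
```

Inputs that score high under such a measure lie in regions the model has effectively not been trained on; the attacks the abstract describes steer perturbations toward exactly those shifted-domain regions.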
| Main Authors: | Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer US, 2022 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8856883/ https://www.ncbi.nlm.nih.gov/pubmed/35221776 http://dx.doi.org/10.1007/s11042-022-12132-7 |
_version_ | 1784653937020764160 |
author | Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner
author_facet | Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner
author_sort | Tuna, Omer Faruk |
collection | PubMed |
description | Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attacks in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
format | Online Article Text |
id | pubmed-8856883 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-8856883 2022-02-22 Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner. Multimed Tools Appl. Article. Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attacks in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively. Springer US 2022-02-18 2022 /pmc/articles/PMC8856883/ /pubmed/35221776 http://dx.doi.org/10.1007/s11042-022-12132-7 Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle | Article; Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner; Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
title | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
title_full | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
title_fullStr | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
title_full_unstemmed | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
title_short | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
title_sort | exploiting epistemic uncertainty of the deep learning models to generate adversarial samples |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8856883/ https://www.ncbi.nlm.nih.gov/pubmed/35221776 http://dx.doi.org/10.1007/s11042-022-12132-7 |
work_keys_str_mv | AT tunaomerfaruk exploitingepistemicuncertaintyofthedeeplearningmodelstogenerateadversarialsamples AT catakferhatozgur exploitingepistemicuncertaintyofthedeeplearningmodelstogenerateadversarialsamples AT eskilmtaner exploitingepistemicuncertaintyofthedeeplearningmodelstogenerateadversarialsamples |
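The headline numbers in the record above (e.g. 82.59% to 85.14% on MNIST Digit) come from a hybrid attack that combines the conventional loss-gradient signal with the epistemic-uncertainty signal. A minimal single-step sketch of that combination, assuming an FGSM-style signed-gradient update, inputs in the [0, 1] pixel range, and illustrative values for `eps`, `lam`, and `T` (not the paper's exact procedure):

```python
# Hedged sketch of a hybrid FGSM-style step: one signed-gradient ascent
# step on cross-entropy loss + lam * MC-dropout predictive variance.
# eps, lam, T, and the [0, 1] pixel range are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def hybrid_fgsm_step(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     eps: float = 0.03, lam: float = 1.0, T: int = 10) -> torch.Tensor:
    model.eval()
    for m in model.modules():              # keep dropout stochastic (MC dropout)
        if isinstance(m, nn.Dropout):
            m.train()
    x_adv = x.clone().detach().requires_grad_(True)
    # T stochastic passes, keeping the graph so the uncertainty term is
    # differentiable with respect to the input.
    probs = torch.stack([F.softmax(model(x_adv), dim=-1) for _ in range(T)])
    ce = F.nll_loss(torch.log(probs.mean(dim=0) + 1e-12), y)  # loss term
    uncertainty = probs.var(dim=0).mean()                     # epistemic term
    (ce + lam * uncertainty).backward()
    # Step in the direction that increases both terms, then clip back
    # to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Weighting the uncertainty term with `lam` lets the ordinary misclassification loss dominate where the model is confident, while the uncertainty term pushes the perturbation toward inputs that fall outside the training distribution.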