
Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface

Bibliographic Details
Main Authors: Oyama, Tatsuya, Okura, Shunsuke, Yoshida, Kota, Fujino, Takeshi
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10220730/
https://www.ncbi.nlm.nih.gov/pubmed/37430657
http://dx.doi.org/10.3390/s23104742
Collection: PubMed
Description: A backdoor attack is an attack that induces misclassification in a deep neural network (DNN). To trigger it, the adversary inputs an image containing a specific pattern (the adversarial mark) into the compromised DNN model (the backdoor model). Conventionally, the adversarial mark is placed on a physical object and enters the image when a photo is captured; with this method the attack is unreliable because the mark's size and position vary with the shooting environment. We have previously proposed creating the adversarial mark by means of a fault injection attack on the mobile industry processor interface (MIPI), the image sensor interface. In this work, we propose an image tampering model that simulates the adversarial mark pattern produced by the actual fault injection, and we train the backdoor model on poison data images created with this simulation model. In a backdoor attack experiment using a backdoor model trained on a dataset containing 5% poison data, the clean data accuracy in normal operation was 91%, while the attack success rate under fault injection was 83%.
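As a rough, hypothetical illustration of the poisoning step described above (5% of the training images carry the adversarial mark and are relabeled to the attacker's target class), the Python/NumPy sketch below shows one common way such poison data can be built. The helper names, the fixed square mark, and its position are assumptions made here for clarity; in the paper the mark pattern is instead derived from the image tampering model of faults injected on the MIPI interface, not from this code.

import numpy as np

def add_adversarial_mark(image, mark_value=255, size=4, top=0, left=0):
    # Overlay a small square "adversarial mark" on a copy of an HxWxC uint8 image.
    # The pixel value, size, and position here are illustrative placeholders only.
    poisoned = image.copy()
    poisoned[top:top + size, left:left + size, :] = mark_value
    return poisoned

def make_poison_set(images, labels, target_class, poison_rate=0.05, seed=0):
    # Mark a random poison_rate fraction of the training images and relabel
    # them to the attacker's target class (the backdoor behavior to be learned).
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(len(images) * poison_rate), replace=False)
    for i in idx:
        images[i] = add_adversarial_mark(images[i])
        labels[i] = target_class
    return images, labels

# Example use (hypothetical arrays): the backdoor model is then trained on the returned set.
# poisoned_x, poisoned_y = make_poison_set(train_x, train_y, target_class=0)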
Record ID: pubmed-10220730
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2023-05-14
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).