
Universal adversarial examples and perturbations for quantum classifiers

Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks.


Bibliographic Details
Main Authors: Gong, Weiyuan, Deng, Dong-Ling
Format: Online Article Text
Language: English
Published: Oxford University Press 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9796671/
https://www.ncbi.nlm.nih.gov/pubmed/36590599
http://dx.doi.org/10.1093/nsr/nwab130
_version_ 1784860539511373824
author Gong, Weiyuan
Deng, Dong-Ling
author_facet Gong, Weiyuan
Deng, Dong-Ling
author_sort Gong, Weiyuan
collection PubMed
description Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks. Yet, quantum learning systems may also suffer from the vulnerability problem: adding a tiny, carefully crafted perturbation to the legitimate input data would cause the systems to make incorrect predictions at a notably high confidence level. In this paper, we study the universality of adversarial examples and perturbations for quantum classifiers. Through concrete examples involving classifications of real-life images and quantum phases of matter, we show that there exist universal adversarial examples that can fool a set of different quantum classifiers. We prove that, for a set of k classifiers with each receiving input data of n qubits, an O(ln k/2^n) increase of the perturbation strength is enough to ensure a moderate universal adversarial risk. In addition, for a given quantum classifier, we show that there exist universal adversarial perturbations, which can be added to different legitimate samples to make them adversarial examples for the classifier. Our results reveal the universality perspective of adversarial attacks for quantum machine learning systems, which would be crucial for practical applications of both near-term and future quantum technologies in solving machine learning problems.
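
The O(ln k/2^n) scaling quoted in the abstract can be made intuitive with a short union-bound calculation. The sketch below assumes, purely for illustration, an exponential tail bound for each individual classifier (loosely motivated by concentration of measure in the 2^n-dimensional Hilbert space); it is not the authors' actual proof:

    % Assumed tail bound (hypothetical, for illustration): a perturbation of
    % strength \varepsilon fails to fool any one fixed classifier with probability
    \Pr[\text{classifier } i \text{ not fooled}] \le e^{-c\,2^{n}\varepsilon}, \quad c > 0.
    % Union bound over all k classifiers:
    \Pr[\text{some classifier not fooled}] \le k\, e^{-c\,2^{n}\varepsilon}.
    % Keeping this failure probability below a constant \delta requires only
    \varepsilon \ge \frac{\ln k + \ln(1/\delta)}{c\,2^{n}} = O\!\left(\frac{\ln k}{2^{n}}\right).

Whatever the precise tail bound, the qualitative point survives: the extra perturbation strength needed for universality across k classifiers grows only logarithmically in k.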
format Online
Article
Text
id pubmed-9796671
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-9796671 2022-12-30 Universal adversarial examples and perturbations for quantum classifiers Gong, Weiyuan Deng, Dong-Ling Natl Sci Rev Research Article Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks. Yet, quantum learning systems may also suffer from the vulnerability problem: adding a tiny, carefully crafted perturbation to the legitimate input data would cause the systems to make incorrect predictions at a notably high confidence level. In this paper, we study the universality of adversarial examples and perturbations for quantum classifiers. Through concrete examples involving classifications of real-life images and quantum phases of matter, we show that there exist universal adversarial examples that can fool a set of different quantum classifiers. We prove that, for a set of k classifiers with each receiving input data of n qubits, an O(ln k/2^n) increase of the perturbation strength is enough to ensure a moderate universal adversarial risk. In addition, for a given quantum classifier, we show that there exist universal adversarial perturbations, which can be added to different legitimate samples to make them adversarial examples for the classifier. Our results reveal the universality perspective of adversarial attacks for quantum machine learning systems, which would be crucial for practical applications of both near-term and future quantum technologies in solving machine learning problems. Oxford University Press 2021-07-22 /pmc/articles/PMC9796671/ /pubmed/36590599 http://dx.doi.org/10.1093/nsr/nwab130 Text en © The Author(s) 2021. Published by Oxford University Press on behalf of China Science Publishing & Media Ltd. https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
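
For intuition about the universal adversarial perturbations described above (one perturbation added to many legitimate samples), here is a minimal classical analogue. The linear model, data, and budget are all hypothetical stand-ins for illustration; the paper's quantum setting instead perturbs n-qubit input states, where no closed-form perturbation is available and iterative constructions are used.

    # Classical-analogue sketch of a universal adversarial perturbation:
    # a single shared delta, added to many legitimate samples, drives a
    # fixed classifier's predictions toward one label. Everything here
    # (model, data, budget) is a hypothetical stand-in.
    import numpy as np

    rng = np.random.default_rng(0)

    d = 64                              # input dimension (stand-in for 2^n amplitudes)
    w = rng.normal(size=d)              # fixed linear classifier: predict sign(w @ x)
    X = rng.normal(size=(200, d))       # legitimate samples
    y = np.sign(X @ w)                  # labels the classifier gets right by construction

    # For this linear toy model, the strongest norm-bounded universal
    # perturbation simply points against w, pushing every input toward
    # the -1 side of the decision boundary.
    eps = 2.0                           # perturbation budget: ||delta||_2 <= eps
    delta = -eps * w / np.linalg.norm(w)

    acc_clean = np.mean(np.sign(X @ w) == y)          # 1.0 by construction
    acc_adv = np.mean(np.sign((X + delta) @ w) == y)  # collapses toward ~0.5
    print(f"clean accuracy: {acc_clean:.2f}, with universal delta: {acc_adv:.2f}")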
spellingShingle Research Article
Gong, Weiyuan
Deng, Dong-Ling
Universal adversarial examples and perturbations for quantum classifiers
title Universal adversarial examples and perturbations for quantum classifiers
title_full Universal adversarial examples and perturbations for quantum classifiers
title_fullStr Universal adversarial examples and perturbations for quantum classifiers
title_full_unstemmed Universal adversarial examples and perturbations for quantum classifiers
title_short Universal adversarial examples and perturbations for quantum classifiers
title_sort universal adversarial examples and perturbations for quantum classifiers
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9796671/
https://www.ncbi.nlm.nih.gov/pubmed/36590599
http://dx.doi.org/10.1093/nsr/nwab130
work_keys_str_mv AT gongweiyuan universaladversarialexamplesandperturbationsforquantumclassifiers
AT dengdongling universaladversarialexamplesandperturbationsforquantumclassifiers