
Fooling the Big Picture in Classification Tasks


Bibliographic Details

Main Authors: Alkhouri, Ismail; Atia, George; Mikhael, Wasfy
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9638414/
https://www.ncbi.nlm.nih.gov/pubmed/36373009
http://dx.doi.org/10.1007/s00034-022-02226-w
Collection: PubMed
Description: Minimally perturbed adversarial examples were shown to drastically reduce the performance of one-stage classifiers while being imperceptible. This paper investigates the susceptibility of hierarchical classifiers, which use fine- and coarse-level output categories, to adversarial attacks. We formulate a program that encodes minimax constraints to induce misclassification of the coarse class of a hierarchical classifier (e.g., changing the prediction of a ‘monkey’ to a ‘vehicle’ instead of some ‘animal’). Subsequently, we develop solutions based on convex relaxations of said program. An algorithm is obtained using the alternating direction method of multipliers, with competitive performance in comparison with state-of-the-art solvers. We show the ability of our approach to fool the coarse classification through a set of measures, such as the relative loss in coarse classification accuracy and imperceptibility factors. In comparison with perturbations generated for one-stage classifiers, we show that fooling a classifier about the ‘big picture’ requires higher perturbation levels, which results in lower imperceptibility. We also examine the impact of different label groupings on the performance of the proposed attacks. Supplementary Information: The online version contains supplementary material available at 10.1007/s00034-022-02226-w.
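As a rough illustration of the attack objective described in the abstract (not the paper's convex relaxation or ADMM solver), the coarse-level goal can be sketched as maximizing the margin between the best fine logit outside the true coarse group and the best one inside it, here by plain gradient ascent on a toy linear classifier. All names (`W`, `fine_to_coarse`, `coarse_attack`, the grouping) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper solves a minimax program via convex
# relaxation and ADMM; plain gradient ascent on a coarse-level margin of a
# toy linear classifier shows the *objective* being attacked, not the solver.
# All names (W, fine_to_coarse, coarse_attack, ...) are hypothetical.

def logits_of(W, x):
    """Fine-class logits of a linear classifier: one dot product per row of W."""
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in W]

def coarse_margin(W, x, fine_to_coarse, true_coarse):
    """Best fine logit outside the true coarse group minus the best inside it.
    A positive margin means the coarse prediction has been flipped."""
    logits = logits_of(W, x)
    inside = max(l for l, c in zip(logits, fine_to_coarse) if c == true_coarse)
    outside = max(l for l, c in zip(logits, fine_to_coarse) if c != true_coarse)
    return outside - inside

def coarse_attack(W, x, fine_to_coarse, true_coarse, step=0.1, iters=100):
    """Gradient ascent on the coarse margin; for a linear model the gradient
    is exactly W[outside argmax] - W[inside argmax]."""
    x = list(x)
    for _ in range(iters):
        logits = logits_of(W, x)
        ins = max((l, i) for i, (l, c) in enumerate(zip(logits, fine_to_coarse))
                  if c == true_coarse)[1]
        outs = max((l, i) for i, (l, c) in enumerate(zip(logits, fine_to_coarse))
                   if c != true_coarse)[1]
        if logits[outs] > logits[ins]:   # coarse class already flipped
            break
        x = [xi + step * (wo - wi) for xi, wo, wi in zip(x, W[outs], W[ins])]
    return x

# Toy example: 4 fine classes; coarse groups {0,1} ("animal") and {2,3} ("vehicle").
W = [[1.0, 0.0], [0.8, 0.2], [0.0, 1.0], [0.2, 0.8]]
fine_to_coarse = [0, 0, 1, 1]
x0 = [1.0, 0.0]                          # confidently in coarse group 0
x_adv = coarse_attack(W, x0, fine_to_coarse, true_coarse=0)
# coarse_margin(W, x_adv, fine_to_coarse, 0) is now positive
```

Note that this sketch places no bound on the perturbation size; the paper's formulation additionally controls imperceptibility, which is exactly where the abstract reports coarse-level attacks needing larger perturbations than one-stage attacks.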
ID: pubmed-9638414
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Circuits Syst Signal Process
Published online: 2022-11-06
Rights: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor (e.g., a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means, with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.