
Lie to Me: Shield Your Emotions from Prying Software

Deep learning approaches for facial Emotion Recognition (ER) obtain high accuracy on basic emotion models, e.g., Ekman’s model, in the specific domain of facial emotional expressions. Thus, facial tracking of users’ emotions could easily be used against the right to privacy or for manipulative purposes. Since recent studies have shown that deep learning models are susceptible to adversarial examples (images intentionally modified to fool a machine learning classifier), we propose to use them to preserve users’ privacy against ER. In this paper, we present a technique for generating Emotion Adversarial Attacks (EAAs). EAAs are performed by applying well-known image filters inspired by Instagram, and a multi-objective evolutionary algorithm is used to determine the best per-image combination of attacking filters. Experimental results on the well-known AffectNet dataset of facial expressions show that our approach successfully attacks emotion classifiers to protect user privacy while the quality of the images, as perceived by humans, is maintained. Several experiments with different sequences of filters show that the Attack Success Rate is very high, above 90% in every test.
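
The abstract describes the attack pipeline at a high level: apply a short sequence of benign-looking photo filters to a face image and check whether an emotion classifier's prediction changes while the image still looks natural to a human. Below is a minimal Python sketch of that idea. The filter set, the predict_emotion stub, and the plain random search over filter sequences are illustrative assumptions; the paper itself uses Instagram-inspired filters and a multi-objective evolutionary algorithm to select the best per-image combination.

```python
# Minimal sketch of a filter-based Emotion Adversarial Attack (EAA).
# Illustrative only: the filters, the classifier stub, and the random search
# below are assumptions standing in for the paper's Instagram-inspired filters
# and multi-objective evolutionary search.
import random
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

# Candidate filters: mild, photo-editing-style adjustments.
FILTERS = {
    "saturate": lambda im: ImageEnhance.Color(im).enhance(1.3),
    "contrast": lambda im: ImageEnhance.Contrast(im).enhance(1.2),
    "brighten": lambda im: ImageEnhance.Brightness(im).enhance(1.1),
    "blur":     lambda im: im.filter(ImageFilter.GaussianBlur(radius=1)),
    "gray":     lambda im: ImageOps.grayscale(im).convert("RGB"),
}

def apply_sequence(image, names):
    """Apply a sequence of named filters to a copy of the image."""
    out = image.copy()
    for name in names:
        out = FILTERS[name](out)
    return out

def predict_emotion(image):
    """Placeholder for a facial emotion classifier, e.g., one trained on AffectNet."""
    raise NotImplementedError("plug in your own ER model here")

def emotion_attack(image, true_label, max_len=3, trials=200, seed=0):
    """Search for a filter sequence that flips the classifier's prediction.

    Returns (sequence, filtered_image) on success, (None, image) otherwise.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        length = rng.randint(1, max_len)
        seq = [rng.choice(list(FILTERS)) for _ in range(length)]
        candidate = apply_sequence(image, seq)
        if predict_emotion(candidate) != true_label:
            return seq, candidate  # successful attack: emotion no longer recognized
    return None, image

# Example usage (requires a real classifier behind predict_emotion):
# img = Image.open("face.jpg").convert("RGB")
# seq, adv = emotion_attack(img, true_label="happy")
```

In the paper the search is multi-objective, trading off attack success against perceptual image quality; the single-objective random search above is only meant to show where the filters and the classifier fit in the loop.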


Bibliographic Details
Main Authors: Baia, Alina Elena, Biondi, Giulio, Franzoni, Valentina, Milani, Alfredo, Poggioni, Valentina
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8840139/
https://www.ncbi.nlm.nih.gov/pubmed/35161713
http://dx.doi.org/10.3390/s22030967
_version_ 1784650545521229824
author Baia, Alina Elena
Biondi, Giulio
Franzoni, Valentina
Milani, Alfredo
Poggioni, Valentina
author_facet Baia, Alina Elena
Biondi, Giulio
Franzoni, Valentina
Milani, Alfredo
Poggioni, Valentina
author_sort Baia, Alina Elena
collection PubMed
description Deep learning approaches for facial Emotion Recognition (ER) obtain high accuracy on basic emotion models, e.g., Ekman’s model, in the specific domain of facial emotional expressions. Thus, facial tracking of users’ emotions could easily be used against the right to privacy or for manipulative purposes. Since recent studies have shown that deep learning models are susceptible to adversarial examples (images intentionally modified to fool a machine learning classifier), we propose to use them to preserve users’ privacy against ER. In this paper, we present a technique for generating Emotion Adversarial Attacks (EAAs). EAAs are performed by applying well-known image filters inspired by Instagram, and a multi-objective evolutionary algorithm is used to determine the best per-image combination of attacking filters. Experimental results on the well-known AffectNet dataset of facial expressions show that our approach successfully attacks emotion classifiers to protect user privacy while the quality of the images, as perceived by humans, is maintained. Several experiments with different sequences of filters show that the Attack Success Rate is very high, above 90% in every test.
format Online
Article
Text
id pubmed-8840139
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-88401392022-02-13 Lie to Me: Shield Your Emotions from Prying Software Baia, Alina Elena Biondi, Giulio Franzoni, Valentina Milani, Alfredo Poggioni, Valentina Sensors (Basel) Article Deep learning approaches for facial Emotion Recognition (ER) obtain high accuracy on basic models, e.g., Ekman’s models, in the specific domain of facial emotional expressions. Thus, facial tracking of users’ emotions could be easily used against the right to privacy or for manipulative purposes. As recent studies have shown that deep learning models are susceptible to adversarial examples (images intentionally modified to fool a machine learning classifier) we propose to use them to preserve users’ privacy against ER. In this paper, we present a technique for generating Emotion Adversarial Attacks (EAAs). EAAs are performed applying well-known image filters inspired from Instagram, and a multi-objective evolutionary algorithm is used to determine the per-image best filters attacking combination. Experimental results on the well-known AffectNet dataset of facial expressions show that our approach successfully attacks emotion classifiers to protect user privacy. On the other hand, the quality of the images from the human perception point of view is maintained. Several experiments with different sequences of filters are run and show that the Attack Success Rate is very high, above 90% for every test. MDPI 2022-01-26 /pmc/articles/PMC8840139/ /pubmed/35161713 http://dx.doi.org/10.3390/s22030967 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Baia, Alina Elena
Biondi, Giulio
Franzoni, Valentina
Milani, Alfredo
Poggioni, Valentina
Lie to Me: Shield Your Emotions from Prying Software
title Lie to Me: Shield Your Emotions from Prying Software
title_full Lie to Me: Shield Your Emotions from Prying Software
title_fullStr Lie to Me: Shield Your Emotions from Prying Software
title_full_unstemmed Lie to Me: Shield Your Emotions from Prying Software
title_short Lie to Me: Shield Your Emotions from Prying Software
title_sort lie to me: shield your emotions from prying software
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8840139/
https://www.ncbi.nlm.nih.gov/pubmed/35161713
http://dx.doi.org/10.3390/s22030967
work_keys_str_mv AT baiaalinaelena lietomeshieldyouremotionsfrompryingsoftware
AT biondigiulio lietomeshieldyouremotionsfrompryingsoftware
AT franzonivalentina lietomeshieldyouremotionsfrompryingsoftware
AT milanialfredo lietomeshieldyouremotionsfrompryingsoftware
AT poggionivalentina lietomeshieldyouremotionsfrompryingsoftware