
Adversarially Robust Learning via Entropic Regularization

In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.


Bibliographic Details

Main Authors: Jagatap, Gauri, Joshi, Ameya, Chowdhury, Animesh Basak, Garg, Siddharth, Hegde, Chinmay
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8764444/
https://www.ncbi.nlm.nih.gov/pubmed/35059637
http://dx.doi.org/10.3389/frai.2021.780843
_version_ 1784634167118528512
author Jagatap, Gauri
Joshi, Ameya
Chowdhury, Animesh Basak
Garg, Siddharth
Hegde, Chinmay
author_facet Jagatap, Gauri
Joshi, Ameya
Chowdhury, Animesh Basak
Garg, Siddharth
Hegde, Chinmay
author_sort Jagatap, Gauri
collection PubMed
description In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
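To make the idea in the description above concrete, the following Python/PyTorch sketch illustrates one way an entropically regularized adversarial loss of this kind could look. It is a minimal sketch under assumptions, not the authors' published ATENT implementation: the function name entropic_adv_loss and all hyperparameters (epsilon, step_size, temperature, n_steps, n_samples) are hypothetical. Perturbations are drawn around each training point with noisy gradient-ascent (Langevin-style) steps, so that neighbors with high loss are sampled with high probability, and the classification loss is then averaged over those samples.

import torch
import torch.nn.functional as F

def entropic_adv_loss(model, x, y, epsilon=8/255, step_size=2/255,
                      temperature=1e-3, n_steps=10, n_samples=2):
    # Hypothetical sketch: draw perturbations that favor high-loss points
    # inside an L-infinity ball of radius epsilon around each input, using
    # noisy (Langevin-style) gradient ascent, then average the loss over them.
    sampled_losses = []
    for _ in range(n_samples):
        # random start inside the epsilon-ball
        delta = (torch.rand_like(x) * 2 - 1) * epsilon
        delta.requires_grad_(True)
        for _ in range(n_steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                # ascent on the loss plus Gaussian noise roughly samples
                # from p(delta) proportional to exp(loss / temperature)
                noise = torch.randn_like(delta) * (2.0 * step_size * temperature) ** 0.5
                delta += step_size * grad + noise
                delta.clamp_(-epsilon, epsilon)  # stay in the neighborhood of x
        # loss at the sampled neighbor, differentiable w.r.t. the model weights
        sampled_losses.append(F.cross_entropy(model(x + delta.detach()), y))
    return torch.stack(sampled_losses).mean()

During training, this averaged loss would be minimized over the network weights in place of the standard cross-entropy term.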
format Online
Article
Text
id pubmed-8764444
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-87644442022-01-19 Adversarially Robust Learning via Entropic Regularization Jagatap, Gauri Joshi, Ameya Chowdhury, Animesh Basak Garg, Siddharth Hegde, Chinmay Front Artif Intell Artificial Intelligence In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10. Frontiers Media S.A. 2022-01-04 /pmc/articles/PMC8764444/ /pubmed/35059637 http://dx.doi.org/10.3389/frai.2021.780843 Text en Copyright © 2022 Jagatap, Joshi, Chowdhury, Garg and Hegde. https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Jagatap, Gauri
Joshi, Ameya
Chowdhury, Animesh Basak
Garg, Siddharth
Hegde, Chinmay
Adversarially Robust Learning via Entropic Regularization
title Adversarially Robust Learning via Entropic Regularization
title_full Adversarially Robust Learning via Entropic Regularization
title_fullStr Adversarially Robust Learning via Entropic Regularization
title_full_unstemmed Adversarially Robust Learning via Entropic Regularization
title_short Adversarially Robust Learning via Entropic Regularization
title_sort adversarially robust learning via entropic regularization
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8764444/
https://www.ncbi.nlm.nih.gov/pubmed/35059637
http://dx.doi.org/10.3389/frai.2021.780843
work_keys_str_mv AT jagatapgauri adversariallyrobustlearningviaentropicregularization
AT joshiameya adversariallyrobustlearningviaentropicregularization
AT chowdhuryanimeshbasak adversariallyrobustlearningviaentropicregularization
AT gargsiddharth adversariallyrobustlearningviaentropicregularization
AT hegdechinmay adversariallyrobustlearningviaentropicregularization