CEB Improves Model Robustness
Intuitively, one way to make classifiers more robust to their input is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of input that are still predictive. Scaling up IB approaches to large scale image classification tasks has proved difficult. We demonstrate that the Conditional Entropy Bottleneck (CEB) can not only scale up to large scale image classification tasks, but can additionally improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks.
Main Authors: | Fischer, Ian; Alemi, Alexander A. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597163/ https://www.ncbi.nlm.nih.gov/pubmed/33286850 http://dx.doi.org/10.3390/e22101081 |
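For context on the method the abstract describes: the Conditional Entropy Bottleneck trades residual information about the input, $I(X;Z\mid Y)$, against predictive information about the label, $I(Y;Z)$. A minimal statement of the objective follows; the trade-off weight $\beta$ and the variational bounds are the standard ones from this line of work (Fischer's CEB formulation), not details stated in this record:

$$\mathrm{CEB} \;\equiv\; \min_{Z}\; \beta\, I(X;Z\mid Y) \;-\; I(Y;Z),$$

where $X$ is the input, $Y$ the label, and $Z$ the learned representation. Both terms admit variational bounds: with an encoder $e(z\mid x)$, a backward encoder $b(z\mid y)$, and a classifier $c(y\mid z)$,

$$I(X;Z\mid Y) \;\le\; \mathbb{E}\big[\log e(z\mid x) - \log b(z\mid y)\big], \qquad -\,I(Y;Z) \;\le\; \mathbb{E}\big[-\log c(y\mid z)\big] - H(Y),$$

so the practical training loss is ordinary cross-entropy plus a compression penalty measured against the class-conditional marginal, which is why the abstract can describe CEB as easy to implement alongside standard data augmentation.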
_version_ | 1783602279468761088 |
---|---|
author | Fischer, Ian; Alemi, Alexander A. |
author_sort | Fischer, Ian |
collection | PubMed |
description | Intuitively, one way to make classifiers more robust to their input is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of input that are still predictive. Scaling up IB approaches to large scale image classification tasks has proved difficult. We demonstrate that the Conditional Entropy Bottleneck (CEB) can not only scale up to large scale image classification tasks, but can additionally improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks. |
format | Online Article Text |
id | pubmed-7597163 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7597163 (2020-11-09). CEB Improves Model Robustness. Fischer, Ian; Alemi, Alexander A. Entropy (Basel), Article. MDPI, 2020-09-25. /pmc/articles/PMC7597163/ /pubmed/33286850 http://dx.doi.org/10.3390/e22101081. Text, en. © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
title | CEB Improves Model Robustness |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597163/ https://www.ncbi.nlm.nih.gov/pubmed/33286850 http://dx.doi.org/10.3390/e22101081 |
work_keys_str_mv | AT fischerian cebimprovesmodelrobustness AT alemialexandera cebimprovesmodelrobustness |