
On the benefits of representation regularization in invariance based domain generalization

A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to narrow this prediction gap between the observed and unseen environments. Previous approaches commonly learn an invariant...

Full description

Bibliographic Details
Main Authors: Shui, Changjian, Wang, Boyu, Gagné, Christian
Format: Online Article Text
Language: English
Published: Springer US 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9012768/
https://www.ncbi.nlm.nih.gov/pubmed/35510180
http://dx.doi.org/10.1007/s10994-021-06080-w
_version_ 1784687861485797376
author Shui, Changjian
Wang, Boyu
Gagné, Christian
author_facet Shui, Changjian
Wang, Boyu
Gagné, Christian
author_sort Shui, Changjian
collection PubMed
description A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to narrow this prediction gap between the observed and unseen environments. Previous approaches commonly learn an invariant representation to achieve good empirical performance. In this paper, we reveal that merely learning an invariant representation remains vulnerable on related unseen environments. To this end, we derive a novel theoretical analysis that controls the unseen test-environment error in representation learning, highlighting the importance of controlling the smoothness of the representation. In practice, our analysis further inspires an efficient regularization method for improving robustness in domain generalization. The proposed regularization is orthogonal to, and can be straightforwardly adopted in, existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions on various datasets and invariance criteria.
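The description above mentions regularizing the smoothness of a learned representation on top of an invariance objective. As a rough illustration only (not the paper's actual regularizer), the sketch below estimates a finite-difference proxy for the squared Jacobian norm of a feature map; the names `feature_fn`, `n_dirs`, and the scaling are all hypothetical choices, and such a penalty would typically be added to a task-plus-invariance loss with a weighting coefficient.

```python
import numpy as np

def smoothness_penalty(feature_fn, x, eps=1e-2, n_dirs=8, seed=0):
    """Finite-difference proxy for the squared Jacobian norm of a
    representation: average of ||f(x + eps*u) - f(x)||^2 / eps^2 over
    random unit directions u. Larger values mean a less smooth f.
    (Illustrative sketch; not the regularizer proposed in the paper.)"""
    rng = np.random.default_rng(seed)
    fx = feature_fn(x)
    total = 0.0
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)          # unit-length perturbation direction
        total += np.sum((feature_fn(x + eps * u) - fx) ** 2) / eps ** 2
    return total / n_dirs

# A steeper (less smooth) linear map incurs a larger penalty than a
# contractive one, so penalizing this term favors smoother representations.
x = np.ones(4)
print(smoothness_penalty(lambda z: 0.1 * z, x))   # small penalty
print(smoothness_penalty(lambda z: 10.0 * z, x))  # large penalty
```

In a training loop, one plausible use is `loss = task_loss + invariance_loss + lam * smoothness_penalty(...)`, where `lam` trades off empirical fit against smoothness.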
format Online
Article
Text
id pubmed-9012768
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-9012768 2022-05-02 On the benefits of representation regularization in invariance based domain generalization Shui, Changjian Wang, Boyu Gagné, Christian Mach Learn Article A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to narrow this prediction gap between the observed and unseen environments. Previous approaches commonly learn an invariant representation to achieve good empirical performance. In this paper, we reveal that merely learning an invariant representation remains vulnerable on related unseen environments. To this end, we derive a novel theoretical analysis that controls the unseen test-environment error in representation learning, highlighting the importance of controlling the smoothness of the representation. In practice, our analysis further inspires an efficient regularization method for improving robustness in domain generalization. The proposed regularization is orthogonal to, and can be straightforwardly adopted in, existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions on various datasets and invariance criteria. Springer US 2022-01-01 2022 /pmc/articles/PMC9012768/ /pubmed/35510180 http://dx.doi.org/10.1007/s10994-021-06080-w Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle Article
Shui, Changjian
Wang, Boyu
Gagné, Christian
On the benefits of representation regularization in invariance based domain generalization
title On the benefits of representation regularization in invariance based domain generalization
title_full On the benefits of representation regularization in invariance based domain generalization
title_fullStr On the benefits of representation regularization in invariance based domain generalization
title_full_unstemmed On the benefits of representation regularization in invariance based domain generalization
title_short On the benefits of representation regularization in invariance based domain generalization
title_sort on the benefits of representation regularization in invariance based domain generalization
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9012768/
https://www.ncbi.nlm.nih.gov/pubmed/35510180
http://dx.doi.org/10.1007/s10994-021-06080-w
work_keys_str_mv AT shuichangjian onthebenefitsofrepresentationregularizationininvariancebaseddomaingeneralization
AT wangboyu onthebenefitsofrepresentationregularizationininvariancebaseddomaingeneralization
AT gagnechristian onthebenefitsofrepresentationregularizationininvariancebaseddomaingeneralization