On Robustness of Neural Architecture Search Under Label Noise


Bibliographic Details
Main Authors: Chen, Yi-Wei, Song, Qingquan, Liu, Xi, Sastry, P. S., Hu, Xia
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects: Big Data
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7931895/
https://www.ncbi.nlm.nih.gov/pubmed/33693377
http://dx.doi.org/10.3389/fdata.2020.00002
_version_ 1783660377304727552
author Chen, Yi-Wei
Song, Qingquan
Liu, Xi
Sastry, P. S.
Hu, Xia
author_facet Chen, Yi-Wei
Song, Qingquan
Liu, Xi
Sastry, P. S.
Hu, Xia
author_sort Chen, Yi-Wei
collection PubMed
description Neural architecture search (NAS), which aims to automatically find suitable neural architectures for a given task, has recently attracted extensive attention in supervised learning applications. In most real-world situations, the class labels provided in the training data can be noisy for many reasons, such as subjective judgments, inadequate information, and random human errors. Existing work has demonstrated the adverse effects of label noise on the learning of neural network weights. These effects can become even more critical in NAS, since candidate architectures are not only trained on noisy labels but are also compared based on their performance on noisy validation sets. In this paper, we systematically explore the robustness of NAS under label noise. We show that label noise in the training and/or validation data can lead to varying degrees of performance variation. Through empirical experiments, we show that using robust loss functions can mitigate the performance degradation under symmetric label noise as well as under a simple model of class-conditional label noise. We also provide a theoretical justification for this. Both the empirical and the theoretical results provide a strong argument for employing robust loss functions in NAS under high levels of label noise.
format Online
Article
Text
id pubmed-7931895
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7931895 2021-03-09 On Robustness of Neural Architecture Search Under Label Noise Chen, Yi-Wei Song, Qingquan Liu, Xi Sastry, P. S. Hu, Xia Front Big Data Big Data Neural architecture search (NAS), which aims to automatically find suitable neural architectures for a given task, has recently attracted extensive attention in supervised learning applications. In most real-world situations, the class labels provided in the training data can be noisy for many reasons, such as subjective judgments, inadequate information, and random human errors. Existing work has demonstrated the adverse effects of label noise on the learning of neural network weights. These effects can become even more critical in NAS, since candidate architectures are not only trained on noisy labels but are also compared based on their performance on noisy validation sets. In this paper, we systematically explore the robustness of NAS under label noise. We show that label noise in the training and/or validation data can lead to varying degrees of performance variation. Through empirical experiments, we show that using robust loss functions can mitigate the performance degradation under symmetric label noise as well as under a simple model of class-conditional label noise. We also provide a theoretical justification for this. Both the empirical and the theoretical results provide a strong argument for employing robust loss functions in NAS under high levels of label noise. Frontiers Media S.A. 2020-02-11 /pmc/articles/PMC7931895/ /pubmed/33693377 http://dx.doi.org/10.3389/fdata.2020.00002 Text en Copyright © 2020 Chen, Song, Liu, Sastry and Hu. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Big Data
Chen, Yi-Wei
Song, Qingquan
Liu, Xi
Sastry, P. S.
Hu, Xia
On Robustness of Neural Architecture Search Under Label Noise
title On Robustness of Neural Architecture Search Under Label Noise
title_full On Robustness of Neural Architecture Search Under Label Noise
title_fullStr On Robustness of Neural Architecture Search Under Label Noise
title_full_unstemmed On Robustness of Neural Architecture Search Under Label Noise
title_short On Robustness of Neural Architecture Search Under Label Noise
title_sort on robustness of neural architecture search under label noise
topic Big Data
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7931895/
https://www.ncbi.nlm.nih.gov/pubmed/33693377
http://dx.doi.org/10.3389/fdata.2020.00002
work_keys_str_mv AT chenyiwei onrobustnessofneuralarchitecturesearchunderlabelnoise
AT songqingquan onrobustnessofneuralarchitecturesearchunderlabelnoise
AT liuxi onrobustnessofneuralarchitecturesearchunderlabelnoise
AT sastryps onrobustnessofneuralarchitecturesearchunderlabelnoise
AT huxia onrobustnessofneuralarchitecturesearchunderlabelnoise
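
To make the abstract's key terms concrete, the following is a minimal, self-contained Python sketch of the two ingredients it refers to: symmetric label noise (each label is flipped, with some probability, to a uniformly chosen different class) and a bounded "robust" loss (here MAE against one-hot targets) alongside standard cross-entropy. This is a hypothetical illustration of the standard definitions, not code from the paper; every function name below is invented for this example.

```python
# Hypothetical sketch, NOT the authors' code: illustrates symmetric label
# noise and a bounded robust loss (MAE) versus standard cross-entropy.
import numpy as np

def flip_labels_symmetric(labels, noise_rate, num_classes, rng):
    """With probability noise_rate, replace each label with a uniformly
    chosen *different* class (symmetric label noise)."""
    noisy = labels.copy()
    for i in np.flatnonzero(rng.random(labels.size) < noise_rate):
        others = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(others)
    return noisy

def cross_entropy(probs, labels):
    # Unbounded per-sample loss: one confidently mislabeled example can
    # dominate the average, a source of sensitivity to label noise.
    return -np.log(probs[np.arange(labels.size), labels] + 1e-12).mean()

def mae(probs, labels):
    # Mean absolute error against one-hot targets; each sample's loss is
    # bounded by 2, the property exploited in symmetric-noise robustness proofs.
    one_hot = np.eye(probs.shape[1])[labels]
    return np.abs(probs - one_hot).sum(axis=1).mean()

rng = np.random.default_rng(0)
num_classes, n = 10, 1000
clean = rng.integers(0, num_classes, size=n)
noisy = flip_labels_symmetric(clean, noise_rate=0.4, num_classes=num_classes, rng=rng)
probs = rng.dirichlet(np.ones(num_classes), size=n)  # stand-in model outputs

print("fraction of labels flipped:", (noisy != clean).mean())
print("cross-entropy on noisy labels:", round(cross_entropy(probs, noisy), 3))
print("MAE on noisy labels:", round(mae(probs, noisy), 3))
```

For context, the per-sample boundedness that MAE has (and cross-entropy lacks) is the property behind known symmetric-noise tolerance results for losses of this kind, e.g., Ghosh et al. (AAAI 2017), for noise rates below (K-1)/K in a K-class problem; the sketch only illustrates the definitions, not the paper's experiments.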