Domain randomization for neural network classification
Large data requirements are often the main hurdle in training neural networks. Convolutional neural network (CNN) classifiers in particular require tens of thousands of pre-labeled images per category to approach human-level accuracy, while often failing to generalize to out-of-domain test sets. The acquisition and labelling of such datasets is often an expensive, time-consuming and tedious task in practice. Synthetic data provides a cheap and efficient solution to assemble such large datasets. Using domain randomization (DR), we show that a sufficiently well generated synthetic image dataset can be used to train a neural network classifier that rivals state-of-the-art models trained on real datasets, achieving accuracy levels as high as 88% on a baseline cats vs dogs classification task. We show that the most important domain randomization parameter is a large variety of subjects, while secondary parameters such as lighting and textures are found to be less significant to the model accuracy. Our results also provide evidence to suggest that models trained on domain randomized images transfer to new domains better than those trained on real photos. Model performance appears to remain stable as the number of categories increases.
Main Authors: | Valtchev, Svetozar Zarko | Wu, Jianhong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2021 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8570318/ https://www.ncbi.nlm.nih.gov/pubmed/34760433 http://dx.doi.org/10.1186/s40537-021-00455-5 |
_version_ | 1784594817043398656 |
---|---|
author | Valtchev, Svetozar Zarko Wu, Jianhong |
author_facet | Valtchev, Svetozar Zarko Wu, Jianhong |
author_sort | Valtchev, Svetozar Zarko |
collection | PubMed |
description | Large data requirements are often the main hurdle in training neural networks. Convolutional neural network (CNN) classifiers in particular require tens of thousands of pre-labeled images per category to approach human-level accuracy, while often failing to generalize to out-of-domain test sets. The acquisition and labelling of such datasets is often an expensive, time-consuming and tedious task in practice. Synthetic data provides a cheap and efficient solution to assemble such large datasets. Using domain randomization (DR), we show that a sufficiently well generated synthetic image dataset can be used to train a neural network classifier that rivals state-of-the-art models trained on real datasets, achieving accuracy levels as high as 88% on a baseline cats vs dogs classification task. We show that the most important domain randomization parameter is a large variety of subjects, while secondary parameters such as lighting and textures are found to be less significant to the model accuracy. Our results also provide evidence to suggest that models trained on domain randomized images transfer to new domains better than those trained on real photos. Model performance appears to remain stable as the number of categories increases. |
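The abstract above describes domain randomization: each synthetic training image is rendered under independently sampled nuisance conditions (subject, lighting, texture, background, camera pose), so the classifier cannot latch onto any fixed rendering artifact. A minimal sketch of the parameter-sampling step is below; it assumes a downstream renderer that consumes such a configuration, and all parameter names and ranges are illustrative, not the paper's actual settings.

```python
import random

def sample_render_params(num_subjects=100, num_textures=20, seed=None):
    """Sample one domain-randomized scene configuration.

    A fresh configuration is drawn per rendered image. The 'subject_id'
    axis models the subject variety the paper found most important;
    the remaining keys model the secondary lighting/texture/pose axes.
    """
    rng = random.Random(seed)
    return {
        # primary DR parameter: which 3D model of the category to render
        "subject_id": rng.randrange(num_subjects),
        # secondary DR parameters: lighting, surface texture, scene, camera
        "light_intensity": rng.uniform(0.2, 1.8),
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "texture_id": rng.randrange(num_textures),
        "background_rgb": tuple(rng.random() for _ in range(3)),
        "camera_yaw_deg": rng.uniform(-45.0, 45.0),
    }

params = sample_render_params(seed=0)
```

Passing a seed makes a configuration reproducible, which is useful when regenerating a specific image for debugging; in normal dataset generation the seed would be omitted so every image gets an independent draw.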
format | Online Article Text |
id | pubmed-8570318 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-8570318 2021-11-08 Domain randomization for neural network classification Valtchev, Svetozar Zarko Wu, Jianhong J Big Data Research Large data requirements are often the main hurdle in training neural networks. Convolutional neural network (CNN) classifiers in particular require tens of thousands of pre-labeled images per category to approach human-level accuracy, while often failing to generalize to out-of-domain test sets. The acquisition and labelling of such datasets is often an expensive, time-consuming and tedious task in practice. Synthetic data provides a cheap and efficient solution to assemble such large datasets. Using domain randomization (DR), we show that a sufficiently well generated synthetic image dataset can be used to train a neural network classifier that rivals state-of-the-art models trained on real datasets, achieving accuracy levels as high as 88% on a baseline cats vs dogs classification task. We show that the most important domain randomization parameter is a large variety of subjects, while secondary parameters such as lighting and textures are found to be less significant to the model accuracy. Our results also provide evidence to suggest that models trained on domain randomized images transfer to new domains better than those trained on real photos. Model performance appears to remain stable as the number of categories increases. Springer International Publishing 2021-07-02 2021 /pmc/articles/PMC8570318/ /pubmed/34760433 http://dx.doi.org/10.1186/s40537-021-00455-5 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Research Valtchev, Svetozar Zarko Wu, Jianhong Domain randomization for neural network classification |
title | Domain randomization for neural network classification |
title_full | Domain randomization for neural network classification |
title_fullStr | Domain randomization for neural network classification |
title_full_unstemmed | Domain randomization for neural network classification |
title_short | Domain randomization for neural network classification |
title_sort | domain randomization for neural network classification |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8570318/ https://www.ncbi.nlm.nih.gov/pubmed/34760433 http://dx.doi.org/10.1186/s40537-021-00455-5 |
work_keys_str_mv | AT valtchevsvetozarzarko domainrandomizationforneuralnetworkclassification AT wujianhong domainrandomizationforneuralnetworkclassification |