GAN-Based Image Colorization for Self-Supervised Visual Feature Learning
Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual feature...
Main Authors: | Treneska, Sandra; Zdravevski, Eftim; Pires, Ivan Miguel; Lameski, Petre; Gievska, Sonja |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8880520/ https://www.ncbi.nlm.nih.gov/pubmed/35214498 http://dx.doi.org/10.3390/s22041599 |
_version_ | 1784659224390795264 |
author | Treneska, Sandra Zdravevski, Eftim Pires, Ivan Miguel Lameski, Petre Gievska, Sonja |
author_facet | Treneska, Sandra Zdravevski, Eftim Pires, Ivan Miguel Lameski, Petre Gievska, Sonja |
author_sort | Treneska, Sandra |
collection | PubMed |
description | Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. Particularly, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an increase of 5% for the classification task and 2.5% for the segmentation task. This demonstrates that image colorization with conditional GANs can boost other downstream tasks’ performance without the need for manual annotation. |
format | Online Article Text |
id | pubmed-8880520 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8880520 2022-02-26 GAN-Based Image Colorization for Self-Supervised Visual Feature Learning Treneska, Sandra Zdravevski, Eftim Pires, Ivan Miguel Lameski, Petre Gievska, Sonja Sensors (Basel) Article Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. Particularly, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an increase of 5% for the classification task and 2.5% for the segmentation task. This demonstrates that image colorization with conditional GANs can boost other downstream tasks’ performance without the need for manual annotation. MDPI 2022-02-18 /pmc/articles/PMC8880520/ /pubmed/35214498 http://dx.doi.org/10.3390/s22041599 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Treneska, Sandra Zdravevski, Eftim Pires, Ivan Miguel Lameski, Petre Gievska, Sonja GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_full | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_fullStr | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_full_unstemmed | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_short | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_sort | gan-based image colorization for self-supervised visual feature learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8880520/ https://www.ncbi.nlm.nih.gov/pubmed/35214498 http://dx.doi.org/10.3390/s22041599 |
work_keys_str_mv | AT treneskasandra ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT zdravevskieftim ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT piresivanmiguel ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT lameskipetre ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT gievskasonja ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning |
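The record above is bibliographic, but the abstract outlines the pretext task: train a conditional GAN to colorize grayscale images, then transfer the learned features to multilabel classification and semantic segmentation. As a purely illustrative aid, the following is a minimal pix2pix-style cGAN colorization sketch in PyTorch; the layer sizes, the L1 loss weight, and every name in it are assumptions made for illustration, not the architecture or hyperparameters used by the authors.

```python
# Minimal, illustrative sketch (not the authors' implementation): a pix2pix-style
# conditional GAN that predicts the two Lab color channels from the L channel.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 1-channel grayscale image to 2 color channels (Lab ab)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, gray):
        return self.net(gray)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the grayscale input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, gray, color):
        return self.net(torch.cat([gray, color], dim=1))

# One illustrative training step on random stand-in data.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

gray = torch.rand(4, 1, 64, 64)                # L channel (network input)
real_color = torch.rand(4, 2, 64, 64) * 2 - 1  # ab channels ("free" labels from the image itself)

# Discriminator step: real (gray, color) pairs -> 1, generated pairs -> 0.
fake_color = G(gray).detach()
real_pred, fake_pred = D(gray, real_color), D(gray, fake_color)
d_loss = bce(real_pred, torch.ones_like(real_pred)) + bce(fake_pred, torch.zeros_like(fake_pred))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator plus an L1 reconstruction term
# (the weight of 100 is an assumption borrowed from common pix2pix-style setups).
fake_color = G(gray)
pred = D(gray, fake_color)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake_color, real_color)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In the setup described by the abstract, this pretext training needs no manual labels, since the color channels of each image serve as supervision; the pretrained weights would then be transferred and fine-tuned on the labeled downstream classification and segmentation tasks.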