Graph-Based Self-Training for Semi-Supervised Deep Similarity Learning

Bibliographic Details
Main Authors: Wang, Yifan; Huang, Yan; Wang, Qicong; Zhao, Chong; Zhang, Zhenchang; Chen, Jian
Format: Online Article
Language: English
Published: MDPI 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10145307/
https://www.ncbi.nlm.nih.gov/pubmed/37112285
http://dx.doi.org/10.3390/s23083944
Description
Summary: Semi-supervised learning is a learning paradigm that can utilize both labeled and unlabeled data to train deep neural networks. Among semi-supervised learning methods, self-training-based methods do not depend on a data augmentation strategy and have better generalization ability. However, their performance is limited by the accuracy of the predicted pseudo-labels. In this paper, we propose to reduce the noise in the pseudo-labels from two aspects: the accuracy of the predictions and the confidence in those predictions. For the first aspect, we propose a similarity graph structure learning (SGSL) model that considers the correlation between unlabeled and labeled samples, which facilitates the learning of more discriminative features and thus yields more accurate predictions. For the second aspect, we propose an uncertainty-based graph convolutional network (UGCN), which aggregates similar features based on the learned graph structure during training, making the features more discriminative. It also outputs the uncertainty of its predictions in the pseudo-label generation phase, so that pseudo-labels are generated only for unlabeled samples with low uncertainty, which reduces the noise in the pseudo-labels. Furthermore, a positive and negative self-training framework is proposed, which combines the SGSL model and the UGCN for end-to-end training. In addition, to introduce more supervisory signals into the self-training process, negative pseudo-labels are generated for unlabeled samples with low prediction confidence, and the positive and negative pseudo-labeled samples are then trained together with a small number of labeled samples to improve the performance of semi-supervised learning. The code is available upon request.
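
The graph-based aggregation step described in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch of building a similarity graph over a mixed batch of labeled and unlabeled features and applying one graph-convolution step. The k-nearest-neighbor rule, cosine similarity, and all shapes are illustrative assumptions: the paper's SGSL model learns the graph structure end to end rather than fixing it with a heuristic as done here.

import torch
import torch.nn.functional as F

def knn_similarity_graph(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Build a row-normalized adjacency matrix over labeled and unlabeled
    features, keeping each sample's k most cosine-similar neighbors."""
    z = F.normalize(features, dim=1)      # unit-norm embeddings
    sim = z @ z.t()                       # pairwise cosine similarity
    topk = sim.topk(k + 1, dim=1)         # +1: each row also matches itself
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk.indices, topk.values.clamp(min=0))
    adj.fill_diagonal_(0)                 # drop self-loops before normalizing
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-8)

def gcn_aggregate(features: torch.Tensor, adj: torch.Tensor,
                  weight: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: take a weighted average of each feature
    and its neighbors, then apply a learned linear map and a nonlinearity."""
    return F.relu(adj @ features @ weight)

# Usage: aggregate 128-d features for a mixed batch of 32 samples.
feats = torch.randn(32, 128)
adj = knn_similarity_graph(feats, k=5)
weight = torch.randn(128, 128) * 0.05
mixed = gcn_aggregate(feats, adj, weight)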
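
The uncertainty-gated positive/negative pseudo-labeling can likewise be sketched. The abstract does not specify how the UGCN estimates uncertainty, so the Monte Carlo dropout below is an assumed stand-in; the thresholds and the choice of the least-likely class as the negative label are illustrative, not the paper's exact method.

import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, passes: int = 8):
    """Average several stochastic forward passes; the standard deviation of
    the predicted-class probability serves as the uncertainty score."""
    model.train()  # keep dropout layers active during the passes
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)

def pseudo_label_losses(model, x, conf_thresh: float = 0.9,
                        uncert_thresh: float = 0.05):
    """Return a positive-pseudo-label loss and a negative-pseudo-label loss
    for a batch of unlabeled inputs x."""
    mean_p, std_p = mc_dropout_predict(model, x)
    conf, pos_label = mean_p.max(dim=1)
    uncert = std_p.gather(1, pos_label.unsqueeze(1)).squeeze(1)

    logits = model(x)
    p = F.softmax(logits, dim=1)
    zero = logits.sum() * 0  # differentiable zero for empty masks

    # Positive pseudo-labels: confident, low-uncertainty predictions only.
    pos_mask = (conf > conf_thresh) & (uncert < uncert_thresh)
    pos_loss = (F.cross_entropy(logits[pos_mask], pos_label[pos_mask])
                if pos_mask.any() else zero)

    # Negative pseudo-labels: for the remaining low-confidence samples,
    # pick the least-likely class and push its probability toward zero.
    neg_mask = ~pos_mask
    neg_label = p.argmin(dim=1)
    neg_p = p.gather(1, neg_label.unsqueeze(1)).squeeze(1)
    neg_loss = (-torch.log(1 - neg_p[neg_mask] + 1e-8).mean()
                if neg_mask.any() else zero)

    return pos_loss, neg_loss

In a training loop, these two losses would be weighted and added to the standard cross-entropy loss on the small labeled set, mirroring the abstract's joint training of positive and negative pseudo-labeled samples together with labeled samples.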