Hessian-Regularized Co-Training for Social Activity Recognition
Main Authors: | Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2014 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4178174/ https://www.ncbi.nlm.nih.gov/pubmed/25259945 http://dx.doi.org/10.1371/journal.pone.0108474 |
_version_ | 1782336903668826112 |
---|---|
author | Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang |
author_facet | Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang |
author_sort | Liu, Weifeng |
collection | PubMed |
description | Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of the examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the underlying manifold. The Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. |
format | Online Article Text |
id | pubmed-4178174 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-4178174 2014-10-02 Hessian-Regularized Co-Training for Social Activity Recognition Liu, Weifeng Li, Yang Lin, Xu Tao, Dacheng Wang, Yanjiang PLoS One Research Article Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of the examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the underlying manifold. The Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. Public Library of Science 2014-09-26 /pmc/articles/PMC4178174/ /pubmed/25259945 http://dx.doi.org/10.1371/journal.pone.0108474 Text en © 2014 Liu et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Liu, Weifeng Li, Yang Lin, Xu Tao, Dacheng Wang, Yanjiang Hessian-Regularized Co-Training for Social Activity Recognition |
title | Hessian-Regularized Co-Training for Social Activity Recognition |
title_full | Hessian-Regularized Co-Training for Social Activity Recognition |
title_fullStr | Hessian-Regularized Co-Training for Social Activity Recognition |
title_full_unstemmed | Hessian-Regularized Co-Training for Social Activity Recognition |
title_short | Hessian-Regularized Co-Training for Social Activity Recognition |
title_sort | hessian-regularized co-training for social activity recognition |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4178174/ https://www.ncbi.nlm.nih.gov/pubmed/25259945 http://dx.doi.org/10.1371/journal.pone.0108474 |
work_keys_str_mv | AT liuweifeng hessianregularizedcotrainingforsocialactivityrecognition AT liyang hessianregularizedcotrainingforsocialactivityrecognition AT linxu hessianregularizedcotrainingforsocialactivityrecognition AT taodacheng hessianregularizedcotrainingforsocialactivityrecognition AT wangyanjiang hessianregularizedcotrainingforsocialactivityrecognition |
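The description above summarizes the method at a high level: one regularized learner per view, a Hessian (manifold) penalty added to each learner's objective, and a co-training loop in which each learner passes its most confident pseudo-labels to the other view. The sketch below is a hedged reconstruction of that scheme, not the authors' implementation: it assumes a LapRLS-style closed-form solution for each view's learner, binary ±1 labels, and a k-NN graph Laplacian placeholder where the paper would build the true Hessian energy matrix; all function names (`rbf_kernel`, `manifold_matrix_placeholder`, `train_view_learner`, `co_train`) and parameter values (`gamma_A`, `gamma_I`, `rounds`, `add_per_round`) are introduced here for illustration only.

```python
# Illustrative sketch of Hessian-regularized co-training, NOT the authors' code.
# Each view gets a manifold-regularized least-squares learner (LapRLS-style
# closed form); the Hessian energy matrix of the paper is replaced here by a
# k-NN graph Laplacian placeholder so the sketch is self-contained and runnable.
import numpy as np


def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def manifold_matrix_placeholder(X, k=5):
    """Stand-in for the per-view Hessian energy matrix.

    The paper estimates that matrix from second-order fits over local tangent
    spaces (Hessian eigenmaps); a symmetric k-NN graph Laplacian is used here
    only to keep the example executable.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0   # connect k nearest neighbours
    W = np.maximum(W, W.T)                        # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W             # graph Laplacian L = D - W


def train_view_learner(X, y_labeled, labeled_idx, gamma_A=1e-2, gamma_I=1e-1):
    """Manifold-regularized RLS on one view; normalization constants are
    folded into gamma_A and gamma_I for brevity."""
    n = X.shape[0]
    K = rbf_kernel(X, X)
    B = manifold_matrix_placeholder(X)            # would be the Hessian matrix
    J = np.zeros((n, n))                          # selects labeled examples
    J[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    # Closed form: alpha = (J K + gamma_A * l * I + gamma_I * B K)^{-1} y
    A = J @ K + gamma_A * len(labeled_idx) * np.eye(n) + gamma_I * B @ K
    alpha = np.linalg.solve(A, y)
    return lambda Z: rbf_kernel(Z, X) @ alpha


def co_train(X1, X2, labeled_idx, y_labeled, rounds=5, add_per_round=2):
    """Alternate between two views: each round, each view's learner labels its
    most confident unlabeled points (binary +/-1 labels) for the other view."""
    n = X1.shape[0]
    views = {0: X1, 1: X2}
    labeled = {0: list(labeled_idx), 1: list(labeled_idx)}
    labels = {0: list(y_labeled), 1: list(y_labeled)}
    for _ in range(rounds):
        for v in (0, 1):
            f = train_view_learner(views[v], np.array(labels[v]), labeled[v])
            scores = f(views[v])
            candidates = [i for i in range(n) if i not in labeled[1 - v]]
            confident = sorted(candidates, key=lambda i: -abs(scores[i]))[:add_per_round]
            for i in confident:                   # pass pseudo-labels across views
                labeled[1 - v].append(i)
                labels[1 - v].append(1.0 if scores[i] >= 0 else -1.0)
    f1 = train_view_learner(X1, np.array(labels[0]), labeled[0])
    f2 = train_view_learner(X2, np.array(labels[1]), labeled[1])
    return lambda Z1, Z2: 0.5 * (f1(Z1) + f2(Z2))  # average the two views
```

Swapping `manifold_matrix_placeholder` for the actual per-view Hessian energy matrix is what would distinguish this scheme from LapCo, the Laplacian-regularized co-training baseline named in the abstract; the regularization weights and co-training schedule above are arbitrary illustration choices rather than values reported in the paper.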