
Semi-Supervised Minimum Error Entropy Principle with Distributed Method

The minimum error entropy principle (MEE) is an alternative to classical least squares, valued for its robustness to non-Gaussian noise. This paper studies the gradient descent algorithm for MEE with a semi-supervised approach and a distributed method, and shows that using the additional information of unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression if the number of local machines increases polynomially with the total data size.
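For readers unfamiliar with MEE, the idea in the abstract can be sketched as follows. A standard sample surrogate for minimizing error entropy is to maximize the Gaussian "information potential" of the residuals, V(w) = (1/n²) Σᵢⱼ G_σ(eᵢ − eⱼ), by gradient ascent. The sketch below is a minimal, generic single-machine illustration of that surrogate, not the paper's actual distributed or semi-supervised algorithm; the function names, step size, and kernel width are assumptions chosen for the example.

```python
import numpy as np

def mee_gradient(w, X, y, sigma):
    """Gradient of the Gaussian information potential
    V(w) = (1/n^2) * sum_{i,j} exp(-(e_i - e_j)^2 / (2 sigma^2)),
    where e = y - X @ w. Maximizing V is the usual sample
    surrogate for minimizing the error entropy (MEE)."""
    e = y - X @ w                        # residuals e_i
    d = e[:, None] - e[None, :]          # pairwise differences d_ij = e_i - e_j
    k = np.exp(-d**2 / (2 * sigma**2))   # Gaussian kernel G_sigma(d_ij)
    a = k * d                            # antisymmetric weights k_ij * d_ij
    n = len(y)
    # dV/dw = (1/(n^2 sigma^2)) * sum_{i,j} k_ij * d_ij * (x_i - x_j)
    return ((a.sum(axis=1) - a.sum(axis=0)) @ X) / (n**2 * sigma**2)

def mee_fit(X, y, sigma=5.0, lr=5.0, steps=400):
    """Fit a linear model by full-batch gradient ascent on the
    information potential (illustrative hyperparameters)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = w + lr * mee_gradient(w, X, y, sigma)
    return w
```

Because the objective depends only on pairwise differences of residuals, MEE is insensitive to a constant offset in the errors; the sketch omits an intercept for that reason. The distributed variant studied in the paper would run such updates on local machines and average the local estimators, with unlabeled data entering the semi-supervised construction.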


Bibliographic Details
Main Authors: Wang, Baobin; Hu, Ting
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512566/
https://www.ncbi.nlm.nih.gov/pubmed/33266692
http://dx.doi.org/10.3390/e20120968
_version_ 1783586187889344512
author Wang, Baobin
Hu, Ting
author_facet Wang, Baobin
Hu, Ting
author_sort Wang, Baobin
collection PubMed
description The minimum error entropy principle (MEE) is an alternative to classical least squares, valued for its robustness to non-Gaussian noise. This paper studies the gradient descent algorithm for MEE with a semi-supervised approach and a distributed method, and shows that using the additional information of unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression if the number of local machines increases polynomially with the total data size.
format Online
Article
Text
id pubmed-7512566
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7512566 2020-11-09 Semi-Supervised Minimum Error Entropy Principle with Distributed Method Wang, Baobin Hu, Ting Entropy (Basel) Article The minimum error entropy principle (MEE) is an alternative to classical least squares, valued for its robustness to non-Gaussian noise. This paper studies the gradient descent algorithm for MEE with a semi-supervised approach and a distributed method, and shows that using the additional information of unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression if the number of local machines increases polynomially with the total data size. MDPI 2018-12-14 /pmc/articles/PMC7512566/ /pubmed/33266692 http://dx.doi.org/10.3390/e20120968 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Wang, Baobin
Hu, Ting
Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title_full Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title_fullStr Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title_full_unstemmed Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title_short Semi-Supervised Minimum Error Entropy Principle with Distributed Method
title_sort semi-supervised minimum error entropy principle with distributed method
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512566/
https://www.ncbi.nlm.nih.gov/pubmed/33266692
http://dx.doi.org/10.3390/e20120968
work_keys_str_mv AT wangbaobin semisupervisedminimumerrorentropyprinciplewithdistributedmethod
AT huting semisupervisedminimumerrorentropyprinciplewithdistributedmethod