
Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning


Bibliographic Details
Main Authors: Zhang, Tao; Wang, Shiyuan; Zhang, Haonan; Xiong, Kui; Wang, Lin
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7515077/
https://www.ncbi.nlm.nih.gov/pubmed/33267302
http://dx.doi.org/10.3390/e21060588
_version_ 1783586735428468736
author Zhang, Tao
Wang, Shiyuan
Zhang, Haonan
Xiong, Kui
Wang, Lin
author_facet Zhang, Tao
Wang, Shiyuan
Zhang, Haonan
Xiong, Kui
Wang, Lin
author_sort Zhang, Tao
collection PubMed
description As a nonlinear similarity measure defined in the reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of the C-Loss results in performance degradation. To address this issue, a convex kernel risk-sensitive loss (KRL) has been proposed to measure similarity in the RKHS; it is a risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, the kernel risk-sensitive mean p-power error (KRP), is proposed by incorporating the mean p-power error into the KRL, yielding a generalization of the KRL measure. The KRP with p = 2 reduces to the KRL, and it can outperform the KRL when an appropriate p is chosen in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, the recursive minimum kernel risk-sensitive mean p-power error (RMKRP) algorithm and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean p-power error (MKRP) criterion. Monte Carlo simulations confirm the superiority of the proposed RMKRP and its quantized version.
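The abstract describes the loss only in words (the expectation of an exponential of a kernel-induced error, generalized to a p-power). As a rough illustrative sketch only, not the paper's exact definition: the snippet below assumes a Gaussian kernel, uses the standard RKHS identity ||φ(x) − φ(y)||² = 2(1 − κ_σ(e)) to form a p-power distance, and takes an empirical mean in place of the expectation. The names `krp_loss` and `gaussian_kernel`, and the placement of the risk-sensitive parameter `lam`, are hypothetical choices for this sketch.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel kappa_sigma(e) evaluated on estimation errors e."""
    return np.exp(-np.asarray(e, dtype=float) ** 2 / (2.0 * sigma ** 2))

def krp_loss(e, lam=1.0, p=2.0, sigma=1.0):
    """Sketch of a kernel risk-sensitive mean p-power error on errors e.

    Assumed form (hypothetical): (1/lam) * E[exp(lam * ||phi(x)-phi(y)||^p)],
    with ||phi(x)-phi(y)||^p = (2 * (1 - kappa_sigma(e)))**(p/2) in the RKHS
    induced by the Gaussian kernel. With p = 2 the exponent is proportional
    to 1 - kappa_sigma(e), matching the KRL-style squared RKHS error.
    """
    d_p = (2.0 * (1.0 - gaussian_kernel(e, sigma))) ** (p / 2.0)
    return float(np.mean(np.exp(lam * d_p)) / lam)
```

Because the kernel is bounded, the exponent saturates for large outliers, so the loss stays bounded where a plain mean-square error would blow up; this boundedness is the source of the robustness the abstract refers to.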
format Online
Article
Text
id pubmed-7515077
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7515077 2020-11-09 Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning Zhang, Tao Wang, Shiyuan Zhang, Haonan Xiong, Kui Wang, Lin Entropy (Basel) Article As a nonlinear similarity measure defined in the reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of the C-Loss results in performance degradation. To address this issue, a convex kernel risk-sensitive loss (KRL) has been proposed to measure similarity in the RKHS; it is a risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, the kernel risk-sensitive mean p-power error (KRP), is proposed by incorporating the mean p-power error into the KRL, yielding a generalization of the KRL measure. The KRP with p = 2 reduces to the KRL, and it can outperform the KRL when an appropriate p is chosen in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, the recursive minimum kernel risk-sensitive mean p-power error (RMKRP) algorithm and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean p-power error (MKRP) criterion. Monte Carlo simulations confirm the superiority of the proposed RMKRP and its quantized version. MDPI 2019-06-13 /pmc/articles/PMC7515077/ /pubmed/33267302 http://dx.doi.org/10.3390/e21060588 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhang, Tao
Wang, Shiyuan
Zhang, Haonan
Xiong, Kui
Wang, Lin
Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title_full Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title_fullStr Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title_full_unstemmed Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title_short Kernel Risk-Sensitive Mean p-Power Error Algorithms for Robust Learning
title_sort kernel risk-sensitive mean p-power error algorithms for robust learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7515077/
https://www.ncbi.nlm.nih.gov/pubmed/33267302
http://dx.doi.org/10.3390/e21060588
work_keys_str_mv AT zhangtao kernelrisksensitivemeanppowererroralgorithmsforrobustlearning
AT wangshiyuan kernelrisksensitivemeanppowererroralgorithmsforrobustlearning
AT zhanghaonan kernelrisksensitivemeanppowererroralgorithmsforrobustlearning
AT xiongkui kernelrisksensitivemeanppowererroralgorithmsforrobustlearning
AT wanglin kernelrisksensitivemeanppowererroralgorithmsforrobustlearning