
Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments

Bibliographic Details
Main Authors: Hishinuma, Kazuhiro; Iiduka, Hideaki
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2019
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805887/
https://www.ncbi.nlm.nih.gov/pubmed/33501092
http://dx.doi.org/10.3389/frobt.2019.00077
author Hishinuma, Kazuhiro
Iiduka, Hideaki
collection PubMed
description The existing machine learning algorithms for minimizing a convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating the line search method, which automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each part of the objective function; the other is based on the parallel subgradient algorithm, which uses the parts independently in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems appearing in tasks of learning support vector machines without precise adjustment of the learning rates. The proposed line search method can determine learning rates that satisfy weaker conditions than the ones used in the existing machine learning algorithms. This implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions. The main contribution of this paper is the provision of three kinds of experiments showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem. Second, we compare the proposed algorithms with Pegasos, a different algorithm designed to learn with a support vector machine efficiently, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.
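To make the idea of run-time learning-rate selection concrete, the following Python sketch shows a projected incremental subgradient loop whose step size is chosen by a simple backtracking line search instead of a pre-set schedule. The function names (incremental_subgradient_with_line_search, project_onto_ball), the Euclidean-ball constraint set, the Armijo-style sufficient-decrease rule, and all constants are illustrative assumptions for this record only; they are not the specific line-search conditions or convergence safeguards analyzed by Hishinuma and Iiduka (2019).

# Illustrative sketch only: a projected incremental subgradient method whose
# step size is chosen at run-time by a simple backtracking line search.
# The Armijo-style rule below is a hypothetical stand-in for the paper's
# line-search conditions, used here just to show the overall control flow.

import numpy as np

def project_onto_ball(x, radius=10.0):
    # Projection onto a closed convex set C; here C is a Euclidean ball
    # of the given radius, chosen only to keep the example self-contained.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def incremental_subgradient_with_line_search(
    component_funcs,      # list of callables f_i(x) -> float (convex, possibly nonsmooth)
    component_subgrads,   # list of callables g_i(x) -> a subgradient of f_i at x
    x0,
    n_epochs=50,
    init_step=1.0,
    shrink=0.5,
    c=1e-4,
):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        # Sequentially and cyclically use each part f_i of the objective.
        for f_i, g_i in zip(component_funcs, component_subgrads):
            g = g_i(x)
            if np.linalg.norm(g) == 0.0:
                continue
            # Backtracking line search: shrink the trial step until the
            # projected step gives sufficient decrease of the component f_i.
            step = init_step
            while step > 1e-10:
                x_trial = project_onto_ball(x - step * g)
                if f_i(x_trial) <= f_i(x) - c * step * np.dot(g, g):
                    break
                step *= shrink
            x = project_onto_ball(x - step * g)
    return x

# Tiny usage example: minimize |x - 1| + |x + 1| over the ball of radius 10,
# whose solution set is the interval [-1, 1].
if __name__ == "__main__":
    funcs = [lambda x: abs(x[0] - 1.0), lambda x: abs(x[0] + 1.0)]
    subgrads = [lambda x: np.array([np.sign(x[0] - 1.0)]),
                lambda x: np.array([np.sign(x[0] + 1.0)])]
    print(incremental_subgradient_with_line_search(funcs, subgrads, np.array([5.0])))

The parallel variant described in the abstract would instead evaluate the component subgradient steps independently (for example, across workers) and combine them, rather than cycling through the components sequentially.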
format Online
Article
Text
id pubmed-7805887
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7805887 2021-01-25 Front Robot AI Robotics and AI Frontiers Media S.A. 2019-08-27 Text en Copyright © 2019 Hishinuma and Iiduka. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805887/
https://www.ncbi.nlm.nih.gov/pubmed/33501092
http://dx.doi.org/10.3389/frobt.2019.00077