Optimization by Adaptive Stochastic Descent

When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland.
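
The two adaptive mechanisms described in the abstract, probabilistically selecting which parameter to perturb based on past success, and growing or shrinking a per-parameter step size, can be sketched in a few dozen lines. The following Python sketch illustrates the idea only and is not the authors' reference implementation; the function name asd, the doubling/halving adaptation factors, and the example objective are assumptions chosen for demonstration.

    import numpy as np

    def asd(f, x0, maxiters=500, step_init=1.0, seed=None):
        """Minimal Adaptive Stochastic Descent sketch (illustrative).

        Keeps a selection weight and a step size for each of the 2n
        candidate moves (increase or decrease each of n parameters) and
        adapts both according to whether a move improved the objective.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        n = x.size
        fval = f(x)
        probs = np.ones(2 * n)             # relative chance of picking each move
        steps = np.full(2 * n, step_init)  # current step size for each move

        for _ in range(maxiters):
            # Choose a move in proportion to its past success.
            k = rng.choice(2 * n, p=probs / probs.sum())
            i, sign = divmod(k, 2)         # parameter index and direction
            direction = 1.0 if sign == 0 else -1.0

            x_new = x.copy()
            x_new[i] += direction * steps[k]
            f_new = f(x_new)

            if f_new < fval:      # improvement: accept and reinforce this move
                x, fval = x_new, f_new
                probs[k] *= 2.0   # adaptation factors here are assumptions,
                steps[k] *= 2.0   # not the values used in the paper
            else:                 # no improvement: reject and de-emphasize
                probs[k] *= 0.5
                steps[k] *= 0.5

        return x, fval

    # Example: a quadratic where the first dimension matters far more than
    # the rest, the regime in which the abstract claims ASD does well.
    weights = np.array([100.0, 1.0, 1.0, 1.0])
    x_best, f_best = asd(lambda x: float(np.sum(weights * x**2)),
                         x0=np.ones(4), maxiters=2000, seed=1)
    print(x_best, f_best)

Because unsuccessful moves have their weights halved, the sampler quickly concentrates function evaluations on the dimensions that matter most, which is the intuition behind the abstract's claim about problems with unequally important parameters.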


Bibliographic Details
Main Authors: Kerr, Cliff C., Dura-Bernal, Salvador, Smolinski, Tomasz G., Chadderdon, George L., Wilson, David P.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2018
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5856269/
https://www.ncbi.nlm.nih.gov/pubmed/29547665
http://dx.doi.org/10.1371/journal.pone.0192944
_version_ 1783307274472652800
author Kerr, Cliff C.
Dura-Bernal, Salvador
Smolinski, Tomasz G.
Chadderdon, George L.
Wilson, David P.
author_facet Kerr, Cliff C.
Dura-Bernal, Salvador
Smolinski, Tomasz G.
Chadderdon, George L.
Wilson, David P.
author_sort Kerr, Cliff C.
collection PubMed
description When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland.
format Online
Article
Text
id pubmed-5856269
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-58562692018-03-28 Optimization by Adaptive Stochastic Descent Kerr, Cliff C. Dura-Bernal, Salvador Smolinski, Tomasz G. Chadderdon, George L. Wilson, David P. PLoS One Research Article When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland. Public Library of Science 2018-03-16 /pmc/articles/PMC5856269/ /pubmed/29547665 http://dx.doi.org/10.1371/journal.pone.0192944 Text en © 2018 Kerr et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Kerr, Cliff C.
Dura-Bernal, Salvador
Smolinski, Tomasz G.
Chadderdon, George L.
Wilson, David P.
Optimization by Adaptive Stochastic Descent
title Optimization by Adaptive Stochastic Descent
title_full Optimization by Adaptive Stochastic Descent
title_fullStr Optimization by Adaptive Stochastic Descent
title_full_unstemmed Optimization by Adaptive Stochastic Descent
title_short Optimization by Adaptive Stochastic Descent
title_sort optimization by adaptive stochastic descent
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5856269/
https://www.ncbi.nlm.nih.gov/pubmed/29547665
http://dx.doi.org/10.1371/journal.pone.0192944
work_keys_str_mv AT kerrcliffc optimizationbyadaptivestochasticdescent
AT durabernalsalvador optimizationbyadaptivestochasticdescent
AT smolinskitomaszg optimizationbyadaptivestochasticdescent
AT chadderdongeorgel optimizationbyadaptivestochasticdescent
AT wilsondavidp optimizationbyadaptivestochasticdescent