Parsimonious Optimization of Multitask Neural Network Hyperparameters
Neural networks are rapidly gaining popularity in chemical modeling and Quantitative Structure–Activity Relationship (QSAR) thanks to their ability to handle multitask problems. However, outcomes of neural networks depend on the tuning of several hyperparameters, whose small variations can often strongly affect their performance. Hence, optimization is a fundamental step in training neural networks although, in many cases, it can be very expensive from a computational point of view. In this study, we compared four of the most widely used approaches for tuning hyperparameters, namely, grid search, random search, tree-structured Parzen estimator, and genetic algorithms, on three multitask QSAR datasets. We mainly focused on parsimonious optimization; thus, not only the performance of the neural networks but also the computational time was taken into account. Furthermore, since the optimization approaches do not directly provide information about the influence of hyperparameters, we applied experimental design strategies to determine their effects on the neural network performance. We found that genetic algorithms, tree-structured Parzen estimator, and random search require on average 0.08% of the hours required by grid search; in addition, tree-structured Parzen estimator and genetic algorithms provide better results than random search.
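For orientation, the following is a minimal, hypothetical sketch of tree-structured Parzen estimator (TPE) hyperparameter tuning for a multitask neural network; it is not the authors' code and does not use the paper's datasets. It assumes Optuna for the TPE sampler, scikit-learn's MLPRegressor as the multitask model, and a synthetic multi-target dataset as a stand-in for QSAR data; the hyperparameter names, ranges, and 30-trial budget are illustrative.

```python
# Hypothetical sketch: TPE-based hyperparameter search for a multitask network.
import numpy as np
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for a multitask QSAR dataset:
# 500 "molecules", 50 descriptors, 3 endpoints (tasks).
X, Y = make_regression(n_samples=500, n_features=50, n_targets=3,
                       noise=0.1, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

def objective(trial):
    # Hyperparameters and ranges are assumptions made for this sketch.
    n_hidden = trial.suggest_int("n_hidden", 16, 256, log=True)
    lr = trial.suggest_float("learning_rate_init", 1e-4, 1e-1, log=True)
    alpha = trial.suggest_float("alpha", 1e-6, 1e-2, log=True)
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                         learning_rate_init=lr, alpha=alpha,
                         max_iter=300, random_state=0)
    model.fit(X_tr, Y_tr)
    # Mean R^2 over the three tasks; higher is better.
    return r2_score(Y_te, model.predict(X_te))

# TPE sampler; swapping in optuna.samplers.RandomSampler() gives plain
# random search with the same trial budget.
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```

Replacing the TPE sampler with a random sampler reproduces random search under the same budget, which is the kind of like-for-like comparison the study reports; grid search would instead exhaustively enumerate a fixed grid over the same hyperparameters.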
Main Authors: | Valsecchi, Cecile; Consonni, Viviana; Todeschini, Roberto; Orlandi, Marco Emilio; Gosetti, Fabio; Ballabio, Davide |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8658836/ https://www.ncbi.nlm.nih.gov/pubmed/34885837 http://dx.doi.org/10.3390/molecules26237254 |
_version_ | 1784612822153428992 |
author | Valsecchi, Cecile; Consonni, Viviana; Todeschini, Roberto; Orlandi, Marco Emilio; Gosetti, Fabio; Ballabio, Davide |
author_facet | Valsecchi, Cecile; Consonni, Viviana; Todeschini, Roberto; Orlandi, Marco Emilio; Gosetti, Fabio; Ballabio, Davide |
author_sort | Valsecchi, Cecile |
collection | PubMed |
description | Neural networks are rapidly gaining popularity in chemical modeling and Quantitative Structure–Activity Relationship (QSAR) thanks to their ability to handle multitask problems. However, outcomes of neural networks depend on the tuning of several hyperparameters, whose small variations can often strongly affect their performance. Hence, optimization is a fundamental step in training neural networks although, in many cases, it can be very expensive from a computational point of view. In this study, we compared four of the most widely used approaches for tuning hyperparameters, namely, grid search, random search, tree-structured Parzen estimator, and genetic algorithms, on three multitask QSAR datasets. We mainly focused on parsimonious optimization; thus, not only the performance of the neural networks but also the computational time was taken into account. Furthermore, since the optimization approaches do not directly provide information about the influence of hyperparameters, we applied experimental design strategies to determine their effects on the neural network performance. We found that genetic algorithms, tree-structured Parzen estimator, and random search require on average 0.08% of the hours required by grid search; in addition, tree-structured Parzen estimator and genetic algorithms provide better results than random search. |
format | Online Article Text |
id | pubmed-8658836 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8658836 2021-12-10 Parsimonious Optimization of Multitask Neural Network Hyperparameters Valsecchi, Cecile; Consonni, Viviana; Todeschini, Roberto; Orlandi, Marco Emilio; Gosetti, Fabio; Ballabio, Davide Molecules Article Neural networks are rapidly gaining popularity in chemical modeling and Quantitative Structure–Activity Relationship (QSAR) thanks to their ability to handle multitask problems. However, outcomes of neural networks depend on the tuning of several hyperparameters, whose small variations can often strongly affect their performance. Hence, optimization is a fundamental step in training neural networks although, in many cases, it can be very expensive from a computational point of view. In this study, we compared four of the most widely used approaches for tuning hyperparameters, namely, grid search, random search, tree-structured Parzen estimator, and genetic algorithms, on three multitask QSAR datasets. We mainly focused on parsimonious optimization; thus, not only the performance of the neural networks but also the computational time was taken into account. Furthermore, since the optimization approaches do not directly provide information about the influence of hyperparameters, we applied experimental design strategies to determine their effects on the neural network performance. We found that genetic algorithms, tree-structured Parzen estimator, and random search require on average 0.08% of the hours required by grid search; in addition, tree-structured Parzen estimator and genetic algorithms provide better results than random search. MDPI 2021-11-30 /pmc/articles/PMC8658836/ /pubmed/34885837 http://dx.doi.org/10.3390/molecules26237254 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Valsecchi, Cecile Consonni, Viviana Todeschini, Roberto Orlandi, Marco Emilio Gosetti, Fabio Ballabio, Davide Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title | Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title_full | Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title_fullStr | Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title_full_unstemmed | Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title_short | Parsimonious Optimization of Multitask Neural Network Hyperparameters |
title_sort | parsimonious optimization of multitask neural network hyperparameters |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8658836/ https://www.ncbi.nlm.nih.gov/pubmed/34885837 http://dx.doi.org/10.3390/molecules26237254 |
work_keys_str_mv | AT valsecchicecile parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters AT consonniviviana parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters AT todeschiniroberto parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters AT orlandimarcoemilio parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters AT gosettifabio parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters AT ballabiodavide parsimoniousoptimizationofmultitaskneuralnetworkhyperparameters |
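The abstract also mentions applying experimental design strategies to determine how hyperparameters affect network performance. The sketch below, again hypothetical and not taken from the paper, shows how main effects can be read off a two-level full factorial design; the three factors and the response function are placeholders for real hyperparameters and measured model performance.

```python
# Hypothetical sketch: main effects from a two-level full factorial design.
from itertools import product
import numpy as np

# Coded levels (-1 low, +1 high) for three illustrative hyperparameters.
factors = ["n_hidden", "learning_rate", "weight_decay"]
design = np.array(list(product([-1, 1], repeat=len(factors))))  # 2^3 = 8 runs

def score(run):
    # Placeholder response surface standing in for measured model performance
    # (e.g., cross-validated accuracy of a trained network at these settings).
    n_hidden, lr, wd = run
    return 0.70 + 0.05 * n_hidden - 0.08 * lr + 0.01 * wd + 0.02 * n_hidden * lr

responses = np.array([score(run) for run in design])

# Main effect of each factor: mean response at the high level
# minus mean response at the low level.
for j, name in enumerate(factors):
    effect = (responses[design[:, j] == 1].mean()
              - responses[design[:, j] == -1].mean())
    print(f"{name}: main effect = {effect:+.3f}")
```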