
Optimal training of integer-valued neural networks with mixed integer programming

Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain aspects of neural networks (NNs). However, the intriguing approach of training NNs with MIP solvers is under-explored. State-of-the-art methods to train NNs are typically gradient-based and require si...

Full description

Bibliographic Details
Main Authors: Thorbjarnarson, Tómas, Yorke-Smith, Neil
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9891529/
https://www.ncbi.nlm.nih.gov/pubmed/36724189
http://dx.doi.org/10.1371/journal.pone.0261029
_version_ 1784881152097517568
author Thorbjarnarson, Tómas
Yorke-Smith, Neil
author_facet Thorbjarnarson, Tómas
Yorke-Smith, Neil
author_sort Thorbjarnarson, Tómas
collection PubMed
description Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain aspects of neural networks (NNs). However, the intriguing approach of training NNs with MIP solvers is under-explored. State-of-the-art methods to train NNs are typically gradient-based and require significant data, computation on GPUs, and extensive hyper-parameter tuning. In contrast, training with MIP solvers does not require GPUs or heavy hyper-parameter tuning, but currently cannot handle anything but small amounts of data. This article builds on recent advances that train binarized NNs using MIP solvers. We go beyond current work by formulating new MIP models which improve training efficiency and which can train the important class of integer-valued neural networks (INNs). We provide two novel methods to further the potential significance of using MIP to train NNs. The first method optimizes the number of neurons in the NN while training. This reduces the need for deciding on network architecture before training. The second method addresses the amount of training data which MIP can feasibly handle: we provide a batch training method that dramatically increases the amount of data that MIP solvers can use to train. We thus provide a promising step towards using much more data than before when training NNs using MIP models. Experimental results on two real-world data-limited datasets demonstrate that our approach strongly outperforms the previous state of the art in training NNs with MIP, in terms of accuracy, training time, and amount of data. Our methodology is proficient at training NNs when minimal training data is available, and at training with minimal memory requirements, which is potentially valuable for deploying to low-memory devices.
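To make the idea described above concrete: training an integer-valued network amounts to a combinatorial optimization problem over bounded integer weights, which a MIP solver would attack via branch-and-bound. The sketch below is a toy illustration only, not the article's actual formulation: it finds integer weights for a single sign-activated neuron by exhaustive search (a stand-in for a MIP solver such as Gurobi or CPLEX, which would scale to real models). The dataset, weight bound, and activation are illustrative assumptions.

```python
from itertools import product

def train_integer_neuron(X, y, bound=2):
    """Find integer weights w and bias b in [-bound, bound] that minimize
    misclassifications of a single sign-activated neuron on (X, y).
    Exhaustive search stands in for the branch-and-bound a MIP solver uses."""
    n = len(X[0])
    best = None
    for wb in product(range(-bound, bound + 1), repeat=n + 1):
        w, b = wb[:n], wb[n]
        # sign activation: output +1 if the pre-activation is >= 0, else -1
        errors = sum(
            1 for xi, yi in zip(X, y)
            if (1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else -1) != yi
        )
        if best is None or errors < best[0]:
            best = (errors, w, b)
    return best  # (misclassifications, weights, bias)

# Tiny linearly separable dataset (AND-like labels in {-1, +1})
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
errors, w, b = train_integer_neuron(X, y)
```

Because every weight is a small integer, the trained model needs only a few bits per parameter, which hints at the low-memory deployment benefit the abstract mentions; the combinatorial search space (here (2·bound+1)^(n+1) candidates) is exactly what makes a MIP solver, rather than enumeration, necessary at realistic scale.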
format Online
Article
Text
id pubmed-9891529
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-98915292023-02-02 Optimal training of integer-valued neural networks with mixed integer programming Thorbjarnarson, Tómas Yorke-Smith, Neil PLoS One Research Article Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain aspects of neural networks (NNs). However, the intriguing approach of training NNs with MIP solvers is under-explored. State-of-the-art methods to train NNs are typically gradient-based and require significant data, computation on GPUs, and extensive hyper-parameter tuning. In contrast, training with MIP solvers does not require GPUs or heavy hyper-parameter tuning, but currently cannot handle anything but small amounts of data. This article builds on recent advances that train binarized NNs using MIP solvers. We go beyond current work by formulating new MIP models which improve training efficiency and which can train the important class of integer-valued neural networks (INNs). We provide two novel methods to further the potential significance of using MIP to train NNs. The first method optimizes the number of neurons in the NN while training. This reduces the need for deciding on network architecture before training. The second method addresses the amount of training data which MIP can feasibly handle: we provide a batch training method that dramatically increases the amount of data that MIP solvers can use to train. We thus provide a promising step towards using much more data than before when training NNs using MIP models. Experimental results on two real-world data-limited datasets demonstrate that our approach strongly outperforms the previous state of the art in training NNs with MIP, in terms of accuracy, training time, and amount of data. Our methodology is proficient at training NNs when minimal training data is available, and at training with minimal memory requirements, which is potentially valuable for deploying to low-memory devices.
Public Library of Science 2023-02-01 /pmc/articles/PMC9891529/ /pubmed/36724189 http://dx.doi.org/10.1371/journal.pone.0261029 Text en © 2023 Thorbjarnarson, Yorke-Smith https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Thorbjarnarson, Tómas
Yorke-Smith, Neil
Optimal training of integer-valued neural networks with mixed integer programming
title Optimal training of integer-valued neural networks with mixed integer programming
title_full Optimal training of integer-valued neural networks with mixed integer programming
title_fullStr Optimal training of integer-valued neural networks with mixed integer programming
title_full_unstemmed Optimal training of integer-valued neural networks with mixed integer programming
title_short Optimal training of integer-valued neural networks with mixed integer programming
title_sort optimal training of integer-valued neural networks with mixed integer programming
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9891529/
https://www.ncbi.nlm.nih.gov/pubmed/36724189
http://dx.doi.org/10.1371/journal.pone.0261029
work_keys_str_mv AT thorbjarnarsontomas optimaltrainingofintegervaluedneuralnetworkswithmixedintegerprogramming
AT yorkesmithneil optimaltrainingofintegervaluedneuralnetworkswithmixedintegerprogramming