Adaptive language model training for molecular design
The vast size of chemical space necessitates computational approaches to automate and accelerate the design of molecular sequences to guide experimental efforts for drug discovery. Genetic algorithms provide a useful framework to incrementally generate molecules by applying mutations to known chemical structures…
Main Authors: | Blanchard, Andrew E.; Bhowmik, Debsindhu; Fox, Zachary; Gounley, John; Glaser, Jens; Akpa, Belinda S.; Irle, Stephan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2023 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10249556/ https://www.ncbi.nlm.nih.gov/pubmed/37291633 http://dx.doi.org/10.1186/s13321-023-00719-7 |
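The abstract describes mutations generated by masked-token prediction over SMILES strings. Below is a minimal sketch of that mutation step, assuming the Hugging Face `transformers` library and a publicly available SMILES masked-language-model checkpoint; the checkpoint name `seyonec/ChemBERTa-zinc-base-v1` is used purely for illustration and is not the model trained in the paper.

```python
# Minimal sketch of mask-prediction mutation for SMILES (not the authors' code).
import random
from transformers import pipeline

# Illustrative checkpoint only; any masked LM trained on SMILES would do.
fill_mask = pipeline("fill-mask", model="seyonec/ChemBERTa-zinc-base-v1")
tok = fill_mask.tokenizer

def mutate(smiles: str, top_k: int = 5) -> list[str]:
    """Mask one learned chemical token and let the model propose
    replacements -- the 'mutation' step of the genetic algorithm."""
    tokens = tok.tokenize(smiles)
    if not tokens:
        return []
    tokens[random.randrange(len(tokens))] = tok.mask_token
    masked = tok.convert_tokens_to_string(tokens)
    # The pipeline returns the top_k most likely completions of the mask.
    candidates = fill_mask(masked, top_k=top_k)
    # Strip the whitespace introduced by decoding; candidates should still
    # be checked for chemical validity (e.g., with RDKit) before use.
    return [c["sequence"].replace(" ", "") for c in candidates]

print(mutate("CC(=O)Oc1ccccc1C(=O)O"))  # candidate mutations of aspirin
```

Invalid SMILES among the candidates are simply discarded; the survivors feed the selection step of the genetic algorithm.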
_version_ | 1785055585008353280 |
---|---|
author | Blanchard, Andrew E.; Bhowmik, Debsindhu; Fox, Zachary; Gounley, John; Glaser, Jens; Akpa, Belinda S.; Irle, Stephan |
author_facet | Blanchard, Andrew E.; Bhowmik, Debsindhu; Fox, Zachary; Gounley, John; Glaser, Jens; Akpa, Belinda S.; Irle, Stephan |
author_sort | Blanchard, Andrew E. |
collection | PubMed |
description | The vast size of chemical space necessitates computational approaches to automate and accelerate the design of molecular sequences to guide experimental efforts for drug discovery. Genetic algorithms provide a useful framework to incrementally generate molecules by applying mutations to known chemical structures. Recently, masked language models have been applied to automate the mutation process by leveraging large compound libraries to learn commonly occurring chemical sequences (i.e., using tokenization) and predict rearrangements (i.e., using mask prediction). Here, we consider how language models can be adapted to improve molecule generation for different optimization tasks. We use two different generation strategies for comparison, fixed and adaptive. The fixed strategy uses a pre-trained model to generate mutations; the adaptive strategy trains the language model on each new generation of molecules selected for target properties during optimization. Our results show that the adaptive strategy allows the language model to more closely fit the distribution of molecules in the population. Therefore, for enhanced fitness optimization, we suggest the use of the fixed strategy during an initial phase followed by the use of the adaptive strategy. We demonstrate the impact of adaptive training by searching for molecules that optimize both heuristic metrics, drug-likeness and synthesizability, as well as predicted protein binding affinity from a surrogate model. Our results show that the adaptive strategy provides a significant improvement in fitness optimization compared to the fixed pre-trained model, empowering the application of language models to molecular design tasks. |
format | Online Article Text |
id | pubmed-10249556 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-10249556 2023-06-10 Adaptive language model training for molecular design Blanchard, Andrew E. Bhowmik, Debsindhu Fox, Zachary Gounley, John Glaser, Jens Akpa, Belinda S. Irle, Stephan J Cheminform Research The vast size of chemical space necessitates computational approaches to automate and accelerate the design of molecular sequences to guide experimental efforts for drug discovery. Genetic algorithms provide a useful framework to incrementally generate molecules by applying mutations to known chemical structures. Recently, masked language models have been applied to automate the mutation process by leveraging large compound libraries to learn commonly occurring chemical sequences (i.e., using tokenization) and predict rearrangements (i.e., using mask prediction). Here, we consider how language models can be adapted to improve molecule generation for different optimization tasks. We use two different generation strategies for comparison, fixed and adaptive. The fixed strategy uses a pre-trained model to generate mutations; the adaptive strategy trains the language model on each new generation of molecules selected for target properties during optimization. Our results show that the adaptive strategy allows the language model to more closely fit the distribution of molecules in the population. Therefore, for enhanced fitness optimization, we suggest the use of the fixed strategy during an initial phase followed by the use of the adaptive strategy. We demonstrate the impact of adaptive training by searching for molecules that optimize both heuristic metrics, drug-likeness and synthesizability, as well as predicted protein binding affinity from a surrogate model. Our results show that the adaptive strategy provides a significant improvement in fitness optimization compared to the fixed pre-trained model, empowering the application of language models to molecular design tasks. Springer International Publishing 2023-06-08 /pmc/articles/PMC10249556/ /pubmed/37291633 http://dx.doi.org/10.1186/s13321-023-00719-7 Text en © UT-Battelle, LLC 2023 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research; Blanchard, Andrew E.; Bhowmik, Debsindhu; Fox, Zachary; Gounley, John; Glaser, Jens; Akpa, Belinda S.; Irle, Stephan; Adaptive language model training for molecular design |
title | Adaptive language model training for molecular design |
title_full | Adaptive language model training for molecular design |
title_fullStr | Adaptive language model training for molecular design |
title_full_unstemmed | Adaptive language model training for molecular design |
title_short | Adaptive language model training for molecular design |
title_sort | adaptive language model training for molecular design |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10249556/ https://www.ncbi.nlm.nih.gov/pubmed/37291633 http://dx.doi.org/10.1186/s13321-023-00719-7 |
work_keys_str_mv | AT blanchardandrewe adaptivelanguagemodeltrainingformoleculardesign AT bhowmikdebsindhu adaptivelanguagemodeltrainingformoleculardesign AT foxzachary adaptivelanguagemodeltrainingformoleculardesign AT gounleyjohn adaptivelanguagemodeltrainingformoleculardesign AT glaserjens adaptivelanguagemodeltrainingformoleculardesign AT akpabelindas adaptivelanguagemodeltrainingformoleculardesign AT irlestephan adaptivelanguagemodeltrainingformoleculardesign |
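The description above outlines the optimization loop: a fixed pre-trained model drives mutation in an initial phase, after which the adaptive strategy fine-tunes the model on each new generation of selected molecules. A hedged sketch of that loop follows, reusing the `mutate` operator sketched earlier; `fine_tune` is a hypothetical stand-in for one round of masked-LM training on the current population, and RDKit's QED is used as a simple drug-likeness fitness (the paper additionally scores synthesizability and a binding-affinity surrogate).

```python
# Hedged sketch of the fixed-then-adaptive loop; `mutate` is defined in the
# earlier sketch and `fine_tune` is a hypothetical helper, not the paper's API.
from rdkit import Chem
from rdkit.Chem import QED

def fitness(smiles: str) -> float:
    """Drug-likeness (QED) as a stand-in fitness; 0 for unparseable SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    return QED.qed(mol) if mol is not None else 0.0

def evolve(population, generations=20, switch=10, keep=100):
    for gen in range(generations):
        # Mutation: propose rearrangements of every molecule via the masked LM.
        children = {c for s in population for c in mutate(s)}
        valid = {c for c in children if Chem.MolFromSmiles(c) is not None}
        # Selection: keep the fittest molecules from parents and children.
        population = sorted(set(population) | valid,
                            key=fitness, reverse=True)[:keep]
        # Fixed phase first; after `switch` generations, adapt the model by
        # training it on the molecules just selected (the adaptive strategy).
        if gen >= switch:
            fine_tune(fill_mask.model, fill_mask.tokenizer, population)
    return population
```

Switching from the fixed to the adaptive strategy partway through mirrors the paper's recommendation: the pre-trained model supplies diversity early on, while later fine-tuning concentrates the proposal distribution on the selected population.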