
Molecule generation using transformers and policy gradient reinforcement learning

Generating novel valid molecules is often a difficult task, because exploring the vast chemical space relies on the intuition of experienced chemists. In recent years, deep learning models have helped accelerate this process. These advanced models can also help identify suitable molecules for disease treatment. In this paper, we propose Taiga, a transformer-based architecture for the generation of molecules with desired properties. Using a two-stage approach, we first treat the problem as a language modeling task of predicting the next token, using SMILES strings. Then, we use reinforcement learning to optimize molecular properties such as QED. This approach allows our model to learn the underlying rules of chemistry and more easily optimize for molecules with desired properties. Our evaluation of Taiga, which was performed with multiple datasets and tasks, shows that Taiga is comparable to, or even outperforms, state-of-the-art baselines for molecule optimization, with improvements in QED ranging from 2 to over 20 percent. The improvement was demonstrated both on datasets containing lead molecules and random molecules. We also show that with its two stages, Taiga is capable of generating molecules with higher biological property scores than the same model without reinforcement learning.
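The abstract's two-stage recipe maps onto a short sketch: stage one trains a decoder-only transformer as a next-token language model over SMILES strings, and stage two fine-tunes the same network with a policy-gradient (REINFORCE) update whose reward is the QED of each sampled molecule. The code below is only a minimal illustration of that recipe, not the authors' Taiga implementation; the class and function names (SmilesLM, policy_gradient_step, qed_reward), the character-level vocabulary, the PyTorch building blocks, and the batch-mean baseline are assumptions made for this sketch.

```python
# Minimal sketch of the two-stage recipe described in the abstract, not the
# authors' released Taiga code. Assumptions: a character-level SMILES
# vocabulary, a small causal transformer built from torch.nn.TransformerEncoder,
# and RDKit's QED as the stage-2 reward. All names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from rdkit import Chem
from rdkit.Chem import QED


class SmilesLM(nn.Module):
    """Decoder-only transformer that predicts the next SMILES token."""

    def __init__(self, vocab_size, d_model=256, n_heads=8, n_layers=4, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):  # ids: (batch, seq) token indices
        seq = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(seq, device=ids.device))
        causal = torch.triu(torch.full((seq, seq), float("-inf"), device=ids.device), 1)
        return self.head(self.encoder(x, mask=causal))  # (batch, seq, vocab)


def lm_loss(model, ids):
    """Stage 1: ordinary next-token cross-entropy over tokenized SMILES strings."""
    logits = model(ids[:, :-1])
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))


def qed_reward(smiles):
    """Stage 2 reward: QED of the decoded molecule, 0 for invalid SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    return QED.qed(mol) if mol is not None else 0.0


def policy_gradient_step(model, optimizer, bos_id, eos_id, itos, batch=16, max_len=100):
    """Stage 2: one REINFORCE update. Sample SMILES from the model, score them with
    QED, and weight each sample's log-likelihood by its baseline-subtracted reward."""
    device = next(model.parameters()).device
    ids = torch.full((batch, 1), bos_id, dtype=torch.long, device=device)
    log_probs, finished = [], torch.zeros(batch, dtype=torch.bool, device=device)
    for _ in range(max_len):
        dist = torch.distributions.Categorical(logits=model(ids)[:, -1])
        nxt = dist.sample()
        lp = dist.log_prob(nxt)
        log_probs.append(torch.where(finished, torch.zeros_like(lp), lp))
        ids = torch.cat([ids, nxt.unsqueeze(1)], dim=1)
        finished |= nxt.eq(eos_id)
        if finished.all():
            break
    smiles = []
    for row in ids[:, 1:].tolist():  # drop the BOS token
        if eos_id in row:
            row = row[: row.index(eos_id)]  # cut at the first EOS
        smiles.append("".join(itos[t] for t in row))
    rewards = torch.tensor([qed_reward(s) for s in smiles], device=device)
    advantage = rewards - rewards.mean()  # batch-mean baseline
    loss = -(advantage * torch.stack(log_probs, dim=1).sum(dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rewards.mean().item()
```

Because the pretrained policy already places most of its probability mass on grammatical SMILES, the policy-gradient stage mainly shifts sampling toward higher-reward molecules rather than having to rediscover validity, which is consistent with the abstract's comparison against the same model without reinforcement learning. Swapping qed_reward for a different property predictor is all that would be needed to target another objective.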


Bibliographic Details
Main Authors: Mazuz, Eyal; Shtar, Guy; Shapira, Bracha; Rokach, Lior
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232454/
https://www.ncbi.nlm.nih.gov/pubmed/37258546
http://dx.doi.org/10.1038/s41598-023-35648-w
collection PubMed
id pubmed-10232454
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Sci Rep
published_online 2023-05-31
rights © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
topic Article