Diversity oriented Deep Reinforcement Learning for targeted molecule generation
Main authors: | Pereira, Tiago; Abbasi, Maryam; Ribeiro, Bernardete; Arrais, Joel P. |
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2021 |
Subjects: | Research Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7944916/ https://www.ncbi.nlm.nih.gov/pubmed/33750461 http://dx.doi.org/10.1186/s13321-021-00498-z |
_version_ | 1783662770135236608 |
author | Pereira, Tiago Abbasi, Maryam Ribeiro, Bernardete Arrais, Joel P. |
author_facet | Pereira, Tiago Abbasi, Maryam Ribeiro, Bernardete Arrais, Joel P. |
author_sort | Pereira, Tiago |
collection | PubMed |
description | In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using the SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. Then, the Generator is optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it, which is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is employed to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to sample new molecules with the experience gained so far. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient and also high inhibitory power against the Adenosine [Formula: see text] and [Formula: see text] opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy. |
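The two-generator sampling scheme described in the abstract can be sketched in a few lines of Python. This is a hypothetical illustration only: `ToyGenerator`, the 0.05 adjustment step, and the moving-average rule for updating the mixing probability are assumptions made for demonstration, not the authors' implementation.

```python
import random

class ToyGenerator:
    """Hypothetical stand-in for a SMILES-emitting recurrent model:
    given a token prefix, it returns the next token. Here it simply
    samples uniformly from a fixed vocabulary."""
    def __init__(self, vocab, seed=0):
        self.vocab = vocab
        self.rng = random.Random(seed)

    def next_token(self, prefix):
        return self.rng.choice(self.vocab)

def sample_smiles(g_fixed, g_updated, p_updated, max_len=20):
    """Build one SMILES string token by token. At each step the
    RL-updated generator is chosen with probability p_updated,
    otherwise the fixed, pre-trained generator is used -- the
    exploration/exploitation compromise described in the abstract."""
    tokens = []
    for _ in range(max_len):
        gen = g_updated if random.random() < p_updated else g_fixed
        tok = gen.next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)
    return "".join(tokens)

def update_mixing_probability(p_updated, reward_history, window=5):
    """Toy schedule (assumed, not the paper's formula): if the
    Predictor's rewards improved over the last `window` samples
    relative to the `window` before, lean more on the updated
    generator; otherwise fall back towards the fixed one to
    recover diversity. The probability is clamped to [0.05, 0.95]."""
    if len(reward_history) < 2 * window:
        return p_updated
    recent = sum(reward_history[-window:]) / window
    earlier = sum(reward_history[-2 * window:-window]) / window
    step = 0.05
    p = p_updated + step if recent > earlier else p_updated - step
    return min(0.95, max(0.05, p))
```

For example, `sample_smiles(ToyGenerator(["C", "N", "O", "<eos>"]), ToyGenerator(["C", "c", "1"], seed=1), p_updated=0.5)` interleaves both models when emitting tokens, and `update_mixing_probability` nudges the balance after each batch of rewards.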
format | Online Article Text |
id | pubmed-7944916 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-79449162021-03-10 Diversity oriented Deep Reinforcement Learning for targeted molecule generation Pereira, Tiago Abbasi, Maryam Ribeiro, Bernardete Arrais, Joel P. J Cheminform Research Article In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using the SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. Then, the Generator is optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it, which is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is employed to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to sample new molecules with the experience gained so far. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient and also high inhibitory power against the Adenosine [Formula: see text] and [Formula: see text] opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy. Springer International Publishing 2021-03-09 /pmc/articles/PMC7944916/ /pubmed/33750461 http://dx.doi.org/10.1186/s13321-021-00498-z Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Article Pereira, Tiago Abbasi, Maryam Ribeiro, Bernardete Arrais, Joel P. Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title | Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title_full | Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title_fullStr | Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title_full_unstemmed | Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title_short | Diversity oriented Deep Reinforcement Learning for targeted molecule generation |
title_sort | diversity oriented deep reinforcement learning for targeted molecule generation |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7944916/ https://www.ncbi.nlm.nih.gov/pubmed/33750461 http://dx.doi.org/10.1186/s13321-021-00498-z |
work_keys_str_mv | AT pereiratiago diversityorienteddeepreinforcementlearningfortargetedmoleculegeneration AT abbasimaryam diversityorienteddeepreinforcementlearningfortargetedmoleculegeneration AT ribeirobernardete diversityorienteddeepreinforcementlearningfortargetedmoleculegeneration AT arraisjoelp diversityorienteddeepreinforcementlearningfortargetedmoleculegeneration |