Deep reinforcement learning for de novo drug design
Main Authors: | Popova, Mariya; Isayev, Olexandr; Tropsha, Alexander |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | American Association for the Advancement of Science, 2018 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6059760/ https://www.ncbi.nlm.nih.gov/pubmed/30050984 http://dx.doi.org/10.1126/sciadv.aap7885 |
_version_ | 1783341924193665024 |
---|---|
author | Popova, Mariya; Isayev, Olexandr; Tropsha, Alexander |
author_facet | Popova, Mariya; Isayev, Olexandr; Tropsha, Alexander |
author_sort | Popova, Mariya |
collection | PubMed |
description | We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties. |
format | Online Article Text |
id | pubmed-6059760 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | American Association for the Advancement of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-60597602018-07-26 Deep reinforcement learning for de novo drug design Popova, Mariya Isayev, Olexandr Tropsha, Alexander Sci Adv Research Articles We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties. American Association for the Advancement of Science 2018-07-25 /pmc/articles/PMC6059760/ /pubmed/30050984 http://dx.doi.org/10.1126/sciadv.aap7885 Text en Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC). http://creativecommons.org/licenses/by-nc/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license (http://creativecommons.org/licenses/by-nc/4.0/) , which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited. |
spellingShingle | Research Articles Popova, Mariya Isayev, Olexandr Tropsha, Alexander Deep reinforcement learning for de novo drug design |
title | Deep reinforcement learning for de novo drug design |
title_full | Deep reinforcement learning for de novo drug design |
title_fullStr | Deep reinforcement learning for de novo drug design |
title_full_unstemmed | Deep reinforcement learning for de novo drug design |
title_short | Deep reinforcement learning for de novo drug design |
title_sort | deep reinforcement learning for de novo drug design |
topic | Research Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6059760/ https://www.ncbi.nlm.nih.gov/pubmed/30050984 http://dx.doi.org/10.1126/sciadv.aap7885 |
work_keys_str_mv | AT popovamariya deepreinforcementlearningfordenovodrugdesign AT isayevolexandr deepreinforcementlearningfordenovodrugdesign AT tropshaalexander deepreinforcementlearningfordenovodrugdesign |
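The description field above outlines the two-phase ReLeaSE scheme: a SMILES generator and a property predictor are first trained separately with supervised learning, then coupled by reinforcement learning so that generation is biased toward compounds with the desired properties. The snippet below is a minimal, hypothetical sketch of one policy-gradient (REINFORCE-style) fine-tuning step in PyTorch, included only to illustrate the idea; the `generator.sample()` and `predictor()` interfaces and the JAK2-style reward shaping are assumptions for illustration, not the authors' released code.

```python
# Hypothetical, minimal REINFORCE-style fine-tuning loop (sketch, not the paper's code).
# Assumptions: `generator` is an autoregressive SMILES model whose sample() returns a
# SMILES string and a list of per-token log-probability tensors; `predictor` maps a
# SMILES string to a scalar property estimate (e.g., predicted pIC50).

import torch

def reinforce_step(generator, predictor, optimizer, reward_fn,
                   batch_size=16, max_len=120):
    """One policy-gradient update biasing generation toward high-reward SMILES."""
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(batch_size):
        # Sample a SMILES string and the log-probabilities of the chosen tokens.
        smiles, token_log_probs = generator.sample(max_len=max_len)
        # Score the molecule with the separately trained predictor, then map the
        # predicted property to a scalar reward.
        reward = reward_fn(predictor(smiles))
        # REINFORCE: increase the likelihood of sequences in proportion to reward.
        loss = loss - reward * torch.stack(token_log_probs).sum()
    (loss / batch_size).backward()
    optimizer.step()

# Illustrative reward shaping (assumption): favour predicted pIC50 above a threshold,
# loosely mirroring the JAK2 inhibitory-activity case study described in the abstract.
def jak2_reward(predicted_pic50, threshold=7.0):
    return torch.exp(torch.tensor(predicted_pic50 - threshold)).clamp(max=10.0)
```

In this sketch the predictor stays frozen during the RL phase and only the generator's parameters are updated, which matches the high-level two-network description above; the exact reward functions and training schedule used in the study are given in the article itself.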