
Learning Retrosynthetic Planning through Simulated Experience

The problem of retrosynthetic planning can be framed as a one-player game, in which the chemist (or a computer program) works backward from a molecular target to simpler starting materials through a series of choices regarding which reactions to perform. This game is challenging because the combinatorial space of possible choices is astronomical, and the value of each choice remains uncertain until the synthesis plan is completed and its cost evaluated. Here, we address this search problem using deep reinforcement learning to identify policies that make (near) optimal reaction choices during each step of retrosynthetic planning according to a user-defined cost metric. Using simulated experience, we train a neural network to estimate the expected synthesis cost or value of any given molecule based on a representation of its molecular structure. We show that learned policies based on this value network can outperform a heuristic approach that favors symmetric disconnections when synthesizing unfamiliar molecules from available starting materials using the fewest number of reactions. We discuss how the learned policies described here can be incorporated into existing synthesis planning tools and how they can be adapted to changes in the synthesis cost objective or material availability.
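
The cost objective described in the abstract can be made concrete with a small worked example. The sketch below is not taken from the paper: it uses an invented toy reaction network and a bare-bones recursion in which buyable starting materials cost nothing and every reaction step adds one unit of cost, loosely mirroring the "fewest number of reactions" objective mentioned above. All molecule names, the reaction table, and the helper function synthesis_cost are hypothetical.

# Minimal sketch (not from the paper): exhaustive synthesis-cost recursion on a
# hypothetical toy retrosynthesis network. Buyable starting materials cost 0 and
# each reaction step adds a fixed unit cost, i.e., a "fewest reactions" objective.

# Each product maps to a list of candidate reactions, each given as the list of
# precursors that reaction requires. All names below are invented.
REACTIONS = {
    "target": [["intermediate_a", "intermediate_b"], ["intermediate_c"]],
    "intermediate_a": [["buyable_1"]],
    "intermediate_b": [["buyable_2", "buyable_3"]],
    "intermediate_c": [["intermediate_a", "buyable_4"]],
}
BUYABLE = {"buyable_1", "buyable_2", "buyable_3", "buyable_4"}
REACTION_COST = 1.0  # cost added by every reaction step


def synthesis_cost(molecule, depth=10):
    """Minimum cost to make `molecule` from buyable materials in the toy model."""
    if molecule in BUYABLE:
        return 0.0  # available starting materials are free
    if depth == 0 or molecule not in REACTIONS:
        return float("inf")  # no route found within the depth limit
    # Choose the candidate reaction whose precursors are cheapest to make.
    return min(
        REACTION_COST + sum(synthesis_cost(p, depth - 1) for p in precursors)
        for precursors in REACTIONS[molecule]
    )


if __name__ == "__main__":
    for mol in ("target", "intermediate_c", "buyable_1"):
        print(mol, "->", synthesis_cost(mol))

For real molecules this kind of exhaustive enumeration is intractable (the combinatorial space of choices is astronomical, as the abstract notes), which is why the authors instead train a neural network to estimate the expected synthesis cost directly from a representation of the molecular structure and use it to guide reaction choices during planning.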

Bibliographic Details
Main Authors: Schreck, John S., Coley, Connor W., Bishop, Kyle J. M.
Format: Online Article Text
Language: English
Published: American Chemical Society, 2019
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6598174/
https://www.ncbi.nlm.nih.gov/pubmed/31263756
http://dx.doi.org/10.1021/acscentsci.9b00055
collection PubMed
id pubmed-6598174
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal ACS Cent Sci
published online 2019-05-31; issue date 2019-06-26
rights Copyright © 2019 American Chemical Society. This is an open access article published under an ACS AuthorChoice License (http://pubs.acs.org/page/policy/authorchoice_termsofuse.html), which permits copying and redistribution of the article or any adaptations for non-commercial purposes.