
Reinforcement Learning for Dynamic Microfluidic Control

Recent years have witnessed an explosion in the application of microfluidic techniques to a wide variety of problems in the chemical and biological sciences. Despite the considerable advantages that microfluidic systems bring to experimental science, microfluidic platforms often exhibit inconsistent performance when operated over extended timescales. Such variations in performance arise from a multiplicity of factors, including microchannel fouling, substrate deformation, temperature and pressure fluctuations, and inherent manufacturing irregularities. The introduction and integration of advanced control algorithms in microfluidic platforms can help mitigate such inconsistencies, paving the way for robust and repeatable long-term experiments. Herein, two state-of-the-art reinforcement learning algorithms, based on Deep Q-Networks and model-free episodic controllers, are applied to two experimental “challenges,” involving both continuous-flow and segmented-flow microfluidic systems. The algorithms attain superhuman performance in controlling and processing each experiment, highlighting the utility of novel control algorithms for automated high-throughput microfluidic experimentation.

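The abstract names its two control strategies but gives no implementation detail, so the following is a minimal sketch of the kind of Deep Q-Network control loop it alludes to: an illustration under stated assumptions, not the authors' implementation. PyTorch is assumed, and MicrofluidicEnv, its four-feature observation vector, the three pump-rate actions, and all hyperparameters are hypothetical stand-ins for the real chip-and-pump hardware.

    # Minimal DQN control-loop sketch (illustrative only; not the authors' code).
    # MicrofluidicEnv is a hypothetical stand-in for an instrumented chip:
    # observations are a few sensor-derived features, actions nudge a pump rate.
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    class MicrofluidicEnv:
        def __init__(self):
            self.rate, self.target = 0.5, 0.8
        def reset(self):
            self.rate = random.random()
            return self._obs()
        def step(self, action):                      # action in {0: slower, 1: hold, 2: faster}
            self.rate = min(1.0, max(0.0, self.rate + 0.05 * (action - 1)))
            reward = -abs(self.rate - self.target)   # reward peaks at the target flow rate
            done = abs(self.rate - self.target) < 0.02
            return self._obs(), reward, done
        def _obs(self):
            return torch.tensor([self.rate, self.target, self.rate - self.target, 1.0])

    q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
    target_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
    target_net.load_state_dict(q_net.state_dict())
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    buffer, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

    env = MicrofluidicEnv()
    for episode in range(200):
        state, done = env.reset(), False
        for _ in range(100):
            # Epsilon-greedy choice among the three pump adjustments.
            if random.random() < eps:
                action = random.randrange(3)
            else:
                with torch.no_grad():
                    action = q_net(state).argmax().item()
            next_state, reward, done = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state
            if len(buffer) >= 64:
                s, a, r, s2, d = map(list, zip(*random.sample(buffer, 64)))
                s, s2 = torch.stack(s), torch.stack(s2)
                a, r = torch.tensor(a), torch.tensor(r)
                d = torch.tensor(d, dtype=torch.float32)
                # Bellman target from a slowly updated target network: the core DQN trick.
                with torch.no_grad():
                    y = r + gamma * (1 - d) * target_net(s2).max(dim=1).values
                q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(q, y)
                opt.zero_grad(); loss.backward(); opt.step()
            if done:
                break
        if episode % 20 == 0:
            target_net.load_state_dict(q_net.state_dict())

The paper's second approach, a model-free episodic controller, would instead keep a lookup table of the best return ever observed for each state–action pair, typically answering queries for unseen states via nearest-neighbour lookup; it is omitted here for brevity.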

Bibliographic Details
Main Authors: Dressler, Oliver J., Howes, Philip D., Choo, Jaebum, deMello, Andrew J.
Format: Online Article Text
Language: English
Published: American Chemical Society 2018
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6644574/
https://www.ncbi.nlm.nih.gov/pubmed/31459137
http://dx.doi.org/10.1021/acsomega.8b01485
collection PubMed
id pubmed-6644574
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-6644574 2019-08-27 Reinforcement Learning for Dynamic Microfluidic Control. Dressler, Oliver J.; Howes, Philip D.; Choo, Jaebum; deMello, Andrew J. ACS Omega, American Chemical Society, 2018-08-29. Copyright © 2018 American Chemical Society. This is an open access article published under an ACS AuthorChoice License (http://pubs.acs.org/page/policy/authorchoice_termsofuse.html), which permits copying and redistribution of the article or any adaptations for non-commercial purposes.