A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization
The building of population pharmacokinetic models can be described as an iterative process in which, given a model and a dataset, the pharmacometrician introduces some changes to the model specification, then performs an evaluation and, based on the predictions obtained, performs further optimization. T...
Main Authors: | Otalvaro, J. D., Yamada, W. M., Hernandez, A. M., Zuluaga, A. F., Chen, R., Neely, M. N. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer US 2022 |
Subjects: | Original Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9938066/ https://www.ncbi.nlm.nih.gov/pubmed/36478350 http://dx.doi.org/10.1007/s10928-022-09829-5 |
_version_ | 1784890560149979136 |
---|---|
author | Otalvaro, J. D. Yamada, W. M. Hernandez, A. M. Zuluaga, A. F. Chen, R. Neely, M. N. |
author_facet | Otalvaro, J. D. Yamada, W. M. Hernandez, A. M. Zuluaga, A. F. Chen, R. Neely, M. N. |
author_sort | Otalvaro, J. D. |
collection | PubMed |
description | The building of population pharmacokinetic models can be described as an iterative process in which, given a model and a dataset, the pharmacometrician introduces some changes to the model specification, then performs an evaluation and, based on the predictions obtained, performs further optimization. This process (perform an action, witness a result, optimize your knowledge) is a perfect scenario for the implementation of Reinforcement Learning algorithms. In this paper we present the conceptual background and an implementation of one of those algorithms, aiming to show pharmacometricians how to automate (to a certain point) the iterative model building process. We present the selected discretization for the action and the state space. SARSA (State-Action-Reward-State-Action) was selected as the RL algorithm, configured with a window of 1000 episodes and a limit of 30 actions per episode. SARSA was configured to control an interface to the Non-Parametric Optimal Design algorithm, which actually performed the parameter optimization. The Reinforcement Learning (RL) based agent managed to obtain the same likelihood and number of support points, with a distribution similar to that reported in the original paper. The total time used to train the agent was 5.5 h, although we think this time can be further improved. It is possible to automatically find the structural model that maximizes the final likelihood for a specific pharmacokinetic dataset by using an RL algorithm. The framework provided could allow the integration of even more actions, e.g., adding/removing covariates, non-linear compartments, or the execution of secondary analyses. Many limitations were found while performing this study, but we hope to address them all in future studies. |
format | Online Article Text |
id | pubmed-9938066 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-9938066 2023-02-19 A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization Otalvaro, J. D. Yamada, W. M. Hernandez, A. M. Zuluaga, A. F. Chen, R. Neely, M. N. J Pharmacokinet Pharmacodyn Original Paper The building of population pharmacokinetic models can be described as an iterative process in which, given a model and a dataset, the pharmacometrician introduces some changes to the model specification, then performs an evaluation and, based on the predictions obtained, performs further optimization. This process (perform an action, witness a result, optimize your knowledge) is a perfect scenario for the implementation of Reinforcement Learning algorithms. In this paper we present the conceptual background and an implementation of one of those algorithms, aiming to show pharmacometricians how to automate (to a certain point) the iterative model building process. We present the selected discretization for the action and the state space. SARSA (State-Action-Reward-State-Action) was selected as the RL algorithm, configured with a window of 1000 episodes and a limit of 30 actions per episode. SARSA was configured to control an interface to the Non-Parametric Optimal Design algorithm, which actually performed the parameter optimization. The Reinforcement Learning (RL) based agent managed to obtain the same likelihood and number of support points, with a distribution similar to that reported in the original paper. The total time used to train the agent was 5.5 h, although we think this time can be further improved. It is possible to automatically find the structural model that maximizes the final likelihood for a specific pharmacokinetic dataset by using an RL algorithm. The framework provided could allow the integration of even more actions, e.g., adding/removing covariates, non-linear compartments, or the execution of secondary analyses. Many limitations were found while performing this study, but we hope to address them all in future studies. Springer US 2022-12-07 2023 /pmc/articles/PMC9938066/ /pubmed/36478350 http://dx.doi.org/10.1007/s10928-022-09829-5 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Original Paper Otalvaro, J. D. Yamada, W. M. Hernandez, A. M. Zuluaga, A. F. Chen, R. Neely, M. N. A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title | A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title_full | A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title_fullStr | A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title_full_unstemmed | A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title_short | A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
title_sort | proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9938066/ https://www.ncbi.nlm.nih.gov/pubmed/36478350 http://dx.doi.org/10.1007/s10928-022-09829-5 |
work_keys_str_mv | AT otalvarojd aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT yamadawm aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT hernandezam aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT zuluagaaf aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT chenr aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT neelymn aproofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT otalvarojd proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT yamadawm proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT hernandezam proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT zuluagaaf proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT chenr proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization AT neelymn proofofconceptreinforcementlearningbasedtoolfornonparametricpopulationpharmacokineticsworkflowoptimization |
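The description field above outlines the loop the paper automates: the agent edits a structural PK model (the state), each edit is an action, SARSA runs for a window of 1000 episodes with at most 30 actions per episode, and the reward comes from the likelihood returned by the Non-Parametric Optimal Design fitting engine. The snippet below is a minimal, hypothetical sketch of such a tabular SARSA loop, not the authors' actual code: the `env` interface (reset/step), the action list, and the ALPHA/GAMMA/EPSILON hyperparameters are illustrative assumptions, since only the episode window and action limit are reported in the abstract.

```python
import random
from collections import defaultdict

EPISODES = 1000        # "window of 1000 episodes" reported in the abstract
MAX_ACTIONS = 30       # limit of 30 actions per episode, from the abstract
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters (not reported)

# Illustrative, hypothetical action space for editing a structural PK model.
ACTIONS = ("add_compartment", "remove_compartment", "toggle_absorption_lag", "stop")


def epsilon_greedy(q, state):
    """Pick an action epsilon-greedily from the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])


def run_sarsa(env):
    """Tabular SARSA over a discretized model-building environment.

    `env` is a hypothetical wrapper around the fitting engine; it is assumed
    to expose reset() -> state and step(action) -> (next_state, reward, done),
    where the reward is derived from the likelihood obtained by the
    parameter-optimization run.
    """
    q = defaultdict(float)
    for _ in range(EPISODES):
        state = env.reset()
        action = epsilon_greedy(q, state)
        for _ in range(MAX_ACTIONS):
            next_state, reward, done = env.step(action)
            next_action = epsilon_greedy(q, next_state)
            # On-policy SARSA update: bootstrap on the action actually taken next.
            q[(state, action)] += ALPHA * (
                reward + GAMMA * q[(next_state, next_action)] - q[(state, action)]
            )
            state, action = next_state, next_action
            if done:
                break
    return q
```

Because the update bootstraps on the action the policy actually selects next, SARSA learns on-policy; with a discretized action/state space as described, the learned Q-table can then be read off to recover the sequence of model edits that maximizes the final likelihood.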