Composition of Web Services Using Markov Decision Processes and Dynamic Programming
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
Main Authors: | Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael |
---|---|
Format: | Online Article Text |
Language: | English |
Published: |
Hindawi Publishing Corporation
2015
|
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4385667/ https://www.ncbi.nlm.nih.gov/pubmed/25874247 http://dx.doi.org/10.1155/2015/545308 |
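The abstract models service composition as a Markov decision process solved with dynamic programming. As a rough illustration of the idea (not the authors' implementation), the sketch below runs policy iteration on a tiny invented WSC MDP: each state is a composition step, each action is a candidate service, and a service's reward stands in for its QoS score. All service names and numbers here are hypothetical.

```python
# Minimal policy-iteration sketch for a toy Web-service-composition MDP.
# States 0..2 are composition steps; state 3 is terminal.
# ACTIONS[s] maps service name -> (QoS reward, next state). Toy values.

GAMMA = 0.9  # discount factor

ACTIONS = {
    0: {"svcA": (0.8, 1), "svcB": (0.5, 1)},
    1: {"svcC": (0.6, 2), "svcD": (0.9, 2)},
    2: {"svcE": (0.7, 3), "svcF": (0.4, 3)},
}

def policy_iteration(actions, gamma=GAMMA):
    states = list(actions)
    policy = {s: next(iter(actions[s])) for s in states}  # arbitrary start
    while True:
        # Policy evaluation: transitions are deterministic, so a few
        # sweeps of the Bellman backup converge to the exact values.
        v = {s: 0.0 for s in states}
        for _ in range(100):
            for s in states:
                r, s2 = actions[s][policy[s]]
                v[s] = r + gamma * v.get(s2, 0.0)  # terminal state has value 0
        # Policy improvement: pick the greedy service at each step.
        stable = True
        for s in states:
            best = max(
                actions[s],
                key=lambda a: actions[s][a][0] + gamma * v.get(actions[s][a][1], 0.0),
            )
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:  # no service selection changed: policy is optimal
            return policy, v

policy, values = policy_iteration(ACTIONS)
print(policy)  # -> {0: 'svcA', 1: 'svcD', 2: 'svcE'}
```

Policy iteration alternates evaluating the current service-selection policy and greedily improving it, which is consistent with the abstract's observation that it needs the fewest iterations to reach an optimal policy.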
_version_ | 1782365070693498880 |
---|---|
author | Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael |
author_facet | Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael |
author_sort | Uc-Cetina, Víctor |
collection | PubMed |
description | We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. |
format | Online Article Text |
id | pubmed-4385667 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | Hindawi Publishing Corporation |
record_format | MEDLINE/PubMed |
spelling | pubmed-4385667 2015-04-13 Composition of Web Services Using Markov Decision Processes and Dynamic Programming Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael ScientificWorldJournal Research Article Hindawi Publishing Corporation 2015 2015-03-22 /pmc/articles/PMC4385667/ /pubmed/25874247 http://dx.doi.org/10.1155/2015/545308 Text en Copyright © 2015 Víctor Uc-Cetina et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article; Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael; Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title | Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title_full | Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title_fullStr | Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title_full_unstemmed | Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title_short | Composition of Web Services Using Markov Decision Processes and Dynamic Programming |
title_sort | composition of web services using markov decision processes and dynamic programming |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4385667/ https://www.ncbi.nlm.nih.gov/pubmed/25874247 http://dx.doi.org/10.1155/2015/545308 |
work_keys_str_mv | AT uccetinavictor compositionofwebservicesusingmarkovdecisionprocessesanddynamicprogramming AT moomenafrancisco compositionofwebservicesusingmarkovdecisionprocessesanddynamicprogramming AT hernandezucanrafael compositionofwebservicesusingmarkovdecisionprocessesanddynamicprogramming |