
Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training

BACKGROUND: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example...


Bibliographic Details
Main Authors: Lam, Tin Y, Meyer, Irmtraud M
Format: Text
Language: English
Published: BioMed Central 2010
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3019189/
https://www.ncbi.nlm.nih.gov/pubmed/21143925
http://dx.doi.org/10.1186/1748-7188-5-38
author Lam, Tin Y
Meyer, Irmtraud M
author_facet Lam, Tin Y
Meyer, Irmtraud M
author_sort Lam, Tin Y
collection PubMed
description BACKGROUND: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications. RESULTS: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-CONVERTER and examine their respective practical merits for three small example models. CONCLUSIONS: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training.
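The following is a minimal sketch, not the paper's linear-memory algorithm, of one iteration of conventional Viterbi training for a small discrete HMM. It illustrates the two-step procedure the abstract contrasts with: first a Viterbi pass over the full sequence with backtracking pointers, then re-estimation of the transition and emission parameters from counts along the decoded path. All model values and function names below are illustrative assumptions, not taken from the article or from HMM-CONVERTER.

import numpy as np

def viterbi_path(obs, log_init, log_trans, log_emit):
    """Most probable state path via standard Viterbi decoding with backtracking."""
    n_states = log_trans.shape[0]
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)   # best log-probability ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backtracking pointers, O(T) memory
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):                      # step 1: forward scan over the sequence
        for j in range(n_states):
            scores = delta[t - 1] + log_trans[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + log_emit[j, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):             # step 2: backward scan (backtracking)
        path[t] = psi[t + 1, path[t + 1]]
    return path

def viterbi_training_step(obs, log_init, log_trans, log_emit, n_symbols, pseudo=1.0):
    """One Viterbi-training iteration: decode, then re-estimate from path counts."""
    n_states = log_trans.shape[0]
    path = viterbi_path(obs, log_init, log_trans, log_emit)
    trans_counts = np.full((n_states, n_states), pseudo)  # pseudo-counts avoid zeros
    emit_counts = np.full((n_states, n_symbols), pseudo)
    for t in range(len(obs)):
        emit_counts[path[t], obs[t]] += 1
        if t > 0:
            trans_counts[path[t - 1], path[t]] += 1
    new_log_trans = np.log(trans_counts / trans_counts.sum(axis=1, keepdims=True))
    new_log_emit = np.log(emit_counts / emit_counts.sum(axis=1, keepdims=True))
    return new_log_trans, new_log_emit

# Toy two-state, two-symbol model; all numbers are made up for illustration.
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.2, 0.8]])
log_emit = np.log([[0.7, 0.3], [0.1, 0.9]])
obs = np.array([0, 0, 1, 1, 1, 0, 1, 1])
log_trans, log_emit = viterbi_training_step(obs, log_init, log_trans, log_emit, n_symbols=2)

In this conventional formulation the backtracking pointers must be stored for the entire sequence, so memory grows with sequence length and the sequence is scanned in both directions; the article's contribution is a one-step, one-direction variant whose memory requirement is independent of sequence length.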
format Text
id pubmed-3019189
institution National Center for Biotechnology Information
language English
publishDate 2010
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-3019189 2011-01-14 Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training Lam, Tin Y Meyer, Irmtraud M Algorithms Mol Biol Research BACKGROUND: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications. RESULTS: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-CONVERTER and examine their respective practical merits for three small example models. CONCLUSIONS: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training. BioMed Central 2010-12-09 /pmc/articles/PMC3019189/ /pubmed/21143925 http://dx.doi.org/10.1186/1748-7188-5-38 Text en Copyright ©2010 Lam and Meyer; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research
Lam, Tin Y
Meyer, Irmtraud M
Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title_full Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title_fullStr Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title_full_unstemmed Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title_short Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
title_sort efficient algorithms for training the parameters of hidden markov models using stochastic expectation maximization (em) training and viterbi training
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3019189/
https://www.ncbi.nlm.nih.gov/pubmed/21143925
http://dx.doi.org/10.1186/1748-7188-5-38
work_keys_str_mv AT lamtiny efficientalgorithmsfortrainingtheparametersofhiddenmarkovmodelsusingstochasticexpectationmaximizationemtrainingandviterbitraining
AT meyerirmtraudm efficientalgorithmsfortrainingtheparametersofhiddenmarkovmodelsusingstochasticexpectationmaximizationemtrainingandviterbitraining