
Empirical evaluation of scoring functions for Bayesian network model selection


Bibliographic Details
Main Authors: Liu, Zhifa, Malone, Brandon, Yuan, Changhe
Format: Online Article Text
Language: English
Published: BioMed Central 2012
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3439716/
https://www.ncbi.nlm.nih.gov/pubmed/23046392
http://dx.doi.org/10.1186/1471-2105-13-S15-S14
author Liu, Zhifa
Malone, Brandon
Yuan, Changhe
collection PubMed
description In this work, we empirically evaluate the capability of various scoring functions for Bayesian networks to recover true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold-standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases; there is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study is that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) score (or, equivalently, the Bayesian information criterion, BIC) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), the Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding results both from using datasets generated from real-world applications, rather than the random processes used in previous studies, and from using learning algorithms to select high-scoring structures, rather than selecting random models. Other findings of our study support existing work: large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to its parameter settings; and fNML performs well on small datasets. We also tested a greedy hill-climbing algorithm and observed results similar to those of the optimal algorithm.
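As a rough illustration of the decomposable scores the abstract compares, the following Python sketch (not taken from the paper; the function name, data layout, and counting scheme are my own) computes a single node's family score, i.e. the log-likelihood of a discrete child given a parent set, minus a BIC/MDL or AIC complexity penalty. Structure-learning algorithms such as the optimal one referenced above sum such family scores over all nodes.

```python
from collections import Counter
from math import log

def family_score(data, child, parents, r, penalty="bic"):
    """Score one node given its parent set (illustrative sketch).

    data    : list of dicts mapping variable name -> discrete value
    child   : name of the child variable
    parents : list of parent variable names
    r       : dict mapping variable name -> number of states
    """
    n = len(data)
    # Count (parent configuration, child value) pairs and parent configurations.
    joint, marg = Counter(), Counter()
    for row in data:
        cfg = tuple(row[p] for p in parents)
        joint[(cfg, row[child])] += 1
        marg[cfg] += 1
    # Maximum log-likelihood of the child given its parents.
    loglik = sum(c * log(c / marg[cfg]) for (cfg, _), c in joint.items())
    # Free parameters: one (r_child - 1)-dimensional distribution per
    # parent configuration.
    q = 1
    for p in parents:
        q *= r[p]
    k = q * (r[child] - 1)
    if penalty == "bic":    # MDL/BIC: (log n)/2 per free parameter
        return loglik - 0.5 * log(n) * k
    elif penalty == "aic":  # AIC: 1 per free parameter
        return loglik - k
    raise ValueError(f"unknown penalty: {penalty}")
```

The BIC penalty grows with the sample size while AIC's does not, which is one mechanical reason the two scores can favor different parent sets on the same data.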
format Online
Article
Text
id pubmed-3439716
institution National Center for Biotechnology Information
language English
publishDate 2012
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-3439716 2012-09-17 BMC Bioinformatics Proceedings. BioMed Central 2012-09-11. Text en Copyright ©2012 Liu et al.; licensee BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
title Empirical evaluation of scoring functions for Bayesian network model selection
topic Proceedings