Bayesian Network Model Averaging Classifiers by Subbagging
When applied to classification problems, Bayesian networks are often used to infer a class variable given feature variables. Earlier reports have described that the classification accuracy of Bayesian network structures achieved by maximizing the marginal likelihood (ML) is lower than that achieved by maximizing the conditional log likelihood (CLL)… (A brief illustrative sketch of the method follows the summary table below.)
Main Authors: | Sugahara, Shouta; Aomi, Itsuki; Ueno, Maomi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9140381/ https://www.ncbi.nlm.nih.gov/pubmed/35626626 http://dx.doi.org/10.3390/e24050743 |
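The abstract describes an algorithmic pipeline: draw subsamples without replacement (subbagging), learn the K best structures on each subsample by the ML score, and predict by marginalizing the class posterior over those structures, i.e., P(c | x) ≈ Σ_G P(G | D) P(c | x, G) restricted to the K best G. The sketch below is a minimal illustration under strong simplifying assumptions, not the authors' implementation: the structure space is restricted to selective naive Bayes models (each candidate structure is just a feature subset, so exhaustive enumeration is feasible for a handful of features) rather than general DAG search, a BDeu-style Dirichlet-multinomial score stands in for the ML score, and all function names (`subbag_kbest_predict`, `dm_log_ml`, and the rest) are hypothetical.

```python
# Toy sketch of subbagged K-best Bayesian network model averaging.
# Assumptions (not from the paper): structures are "selective naive Bayes"
# models (a feature subset, conditionally independent given the class);
# a BDeu-style Dirichlet-multinomial score stands in for the ML score.
import numpy as np
from itertools import combinations
from scipy.special import gammaln, logsumexp

def dm_log_ml(counts, alpha=1.0):
    """Dirichlet-multinomial log marginal likelihood of one CPT.
    counts: (n_parent_configs, n_states) integer array."""
    a = alpha / counts.shape[1]  # BDeu-like uniform prior per state
    row = gammaln(alpha) - gammaln(alpha + counts.sum(axis=1))
    cell = gammaln(a + counts) - gammaln(a)
    return row.sum() + cell.sum()

def score_structure(X, y, subset, n_classes, cards, alpha=1.0):
    """Log ML of one selective-naive-Bayes structure (a feature subset)."""
    s = dm_log_ml(np.bincount(y, minlength=n_classes)[None, :], alpha)
    for f in subset:
        cnt = np.zeros((n_classes, cards[f]), dtype=int)
        np.add.at(cnt, (y, X[:, f]), 1)  # counts of feature f given class
        s += dm_log_ml(cnt, alpha)
    return s

def class_log_posterior(X, y, subset, x, n_classes, cards, alpha=1.0):
    """log P(c | x) under one structure, with Dirichlet-smoothed CPTs."""
    cls = np.bincount(y, minlength=n_classes).astype(float)
    lp = np.log(cls + alpha) - np.log(cls.sum() + alpha * n_classes)
    for f in subset:
        cnt = np.zeros((n_classes, cards[f]))
        np.add.at(cnt, (y, X[:, f]), 1)
        probs = (cnt + alpha) / (cnt.sum(axis=1, keepdims=True) + alpha * cards[f])
        lp += np.log(probs[:, x[f]])
    return lp - logsumexp(lp)

def subbag_kbest_predict(X, y, x, K=3, M=10, frac=0.5, seed=0):
    """Average the K-best model-averaged class posterior over M subbags."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_classes = y.max() + 1
    cards = X.max(axis=0) + 1
    # Exhaustive structure enumeration: only feasible for small d.
    structures = [s for r in range(d + 1) for s in combinations(range(d), r)]
    votes = []
    for _ in range(M):
        # Subbagging: random subsample WITHOUT replacement.
        idx = rng.choice(n, size=int(frac * n), replace=False)
        Xs, ys = X[idx], y[idx]
        scores = np.array([score_structure(Xs, ys, s, n_classes, cards)
                           for s in structures])
        top = np.argsort(scores)[-K:]  # K-best structures by ML score
        w = np.exp(scores[top] - logsumexp(scores[top]))  # posterior weights
        post = sum(wk * np.exp(class_log_posterior(Xs, ys, structures[k], x,
                                                   n_classes, cards))
                   for wk, k in zip(w, top))
        votes.append(post)
    return np.mean(votes, axis=0)
```

On integer-coded data, `subbag_kbest_predict(X, y, x_new)` returns the subbag-averaged class posterior for `x_new`, and its argmax gives the predicted class. Restricting the structure space is what keeps this sketch short; the paper's method searches general Bayesian network structures instead.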
_version_ | 1784715081797337088 |
---|---|
author | Sugahara, Shouta Aomi, Itsuki Ueno, Maomi |
author_facet | Sugahara, Shouta Aomi, Itsuki Ueno, Maomi |
author_sort | Sugahara, Shouta |
collection | PubMed |
description | When applied to classification problems, Bayesian networks are often used to infer a class variable given feature variables. Earlier reports have described that the classification accuracy of Bayesian network structures achieved by maximizing the marginal likelihood (ML) is lower than that achieved by maximizing the conditional log likelihood (CLL) of the class variable given the feature variables. Nevertheless, because ML has asymptotic consistency, the performance of Bayesian network structures achieved by maximizing ML is not necessarily worse than that achieved by maximizing CLL for large data. However, the error of learning structures by maximizing ML becomes much larger for small sample sizes, and that large error degrades the classification accuracy. As a method to resolve this shortcoming, model averaging has been proposed, which marginalizes the class variable posterior over all structures. However, the posterior standard error of each structure in the model averaging becomes large as the sample size becomes small, which in turn degrades the classification accuracy. The main idea of this study is to improve the classification accuracy using subbagging, a modification of bagging that uses random sampling without replacement, to reduce the posterior standard error of each structure in model averaging. Moreover, to guarantee asymptotic consistency, we use the K-best method with the ML score. The experimentally obtained results demonstrate that our proposed method provides more accurate classification than earlier Bayesian network classifier (BNC) methods and other state-of-the-art ensemble methods. |
format | Online Article Text |
id | pubmed-9140381 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9140381 2022-05-28 Bayesian Network Model Averaging Classifiers by Subbagging Sugahara, Shouta Aomi, Itsuki Ueno, Maomi Entropy (Basel) Article When applied to classification problems, Bayesian networks are often used to infer a class variable given feature variables. Earlier reports have described that the classification accuracy of Bayesian network structures achieved by maximizing the marginal likelihood (ML) is lower than that achieved by maximizing the conditional log likelihood (CLL) of the class variable given the feature variables. Nevertheless, because ML has asymptotic consistency, the performance of Bayesian network structures achieved by maximizing ML is not necessarily worse than that achieved by maximizing CLL for large data. However, the error of learning structures by maximizing ML becomes much larger for small sample sizes, and that large error degrades the classification accuracy. As a method to resolve this shortcoming, model averaging has been proposed, which marginalizes the class variable posterior over all structures. However, the posterior standard error of each structure in the model averaging becomes large as the sample size becomes small, which in turn degrades the classification accuracy. The main idea of this study is to improve the classification accuracy using subbagging, a modification of bagging that uses random sampling without replacement, to reduce the posterior standard error of each structure in model averaging. Moreover, to guarantee asymptotic consistency, we use the K-best method with the ML score. The experimentally obtained results demonstrate that our proposed method provides more accurate classification than earlier Bayesian network classifier (BNC) methods and other state-of-the-art ensemble methods. MDPI 2022-05-23 /pmc/articles/PMC9140381/ /pubmed/35626626 http://dx.doi.org/10.3390/e24050743 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Sugahara, Shouta Aomi, Itsuki Ueno, Maomi Bayesian Network Model Averaging Classifiers by Subbagging |
title | Bayesian Network Model Averaging Classifiers by Subbagging |
title_full | Bayesian Network Model Averaging Classifiers by Subbagging |
title_fullStr | Bayesian Network Model Averaging Classifiers by Subbagging |
title_full_unstemmed | Bayesian Network Model Averaging Classifiers by Subbagging |
title_short | Bayesian Network Model Averaging Classifiers by Subbagging |
title_sort | bayesian network model averaging classifiers by subbagging |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9140381/ https://www.ncbi.nlm.nih.gov/pubmed/35626626 http://dx.doi.org/10.3390/e24050743 |
work_keys_str_mv | AT sugaharashouta bayesiannetworkmodelaveragingclassifiersbysubbagging AT aomiitsuki bayesiannetworkmodelaveragingclassifiersbysubbagging AT uenomaomi bayesiannetworkmodelaveragingclassifiersbysubbagging |