Universal Target Learning: An Efficient and Effective Technique for Semi-Naive Bayesian Learning
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2019
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7515258/
https://www.ncbi.nlm.nih.gov/pubmed/33267443
http://dx.doi.org/10.3390/e21080729
Summary: To mitigate the negative effect of classification bias caused by overfitting, semi-naive Bayesian techniques seek to mine the implicit dependency relationships in unlabeled testing instances. By redefining some criteria from information theory, Target Learning (TL) proposes to build, for each unlabeled testing instance [Formula: see text], the Bayesian network classifier BNC [Formula: see text], which is independent of and complementary to the BNC [Formula: see text] learned from training data [Formula: see text]. In this paper, we extend TL to Universal Target Learning (UTL) to identify redundant correlations between attribute values and to maximize the bits encoded in the Bayesian network in terms of log likelihood. We take the k-dependence Bayesian classifier as an example to investigate the effect of UTL on BNC [Formula: see text] and BNC [Formula: see text]. Our extensive experimental results on 40 UCI datasets show that UTL helps BNC improve its generalization performance.
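The k-dependence Bayesian classifier mentioned in the abstract generalizes naive Bayes by allowing each attribute up to k attribute parents in addition to the class. At k = 0 it reduces to plain naive Bayes, which the following minimal sketch illustrates (the toy dataset, function names, and Laplace smoothing are illustrative assumptions, not taken from the paper):

```python
from collections import Counter
import math

# Minimal categorical naive Bayes -- the k = 0 case of the k-dependence
# Bayesian classifier (KDB).  A full KDB would additionally rank attributes
# by (conditional) mutual information and give each one up to k attribute
# parents; this sketch keeps only the class as parent.
def train_nb(X, y, alpha=1.0):
    classes = Counter(y)                  # class frequencies for the prior
    n = len(y)
    n_attrs = len(X[0])
    # counts[c][j][v] = occurrences of value v for attribute j under class c
    counts = {c: [Counter() for _ in range(n_attrs)] for c in classes}
    values = [set() for _ in range(n_attrs)]
    for xi, c in zip(X, y):
        for j, v in enumerate(xi):
            counts[c][j][v] += 1
            values[j].add(v)

    def predict(x):
        best, best_lp = None, -math.inf
        for c, nc in classes.items():
            lp = math.log(nc / n)         # log prior P(c)
            for j, v in enumerate(x):
                # Laplace-smoothed conditional log-probability log P(v | c)
                lp += math.log((counts[c][j][v] + alpha) /
                               (nc + alpha * len(values[j])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return predict

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
predict = train_nb(X, y)
print(predict(("rain", "mild")))  # → yes
```

TL and UTL build on this family of models by learning an additional, complementary classifier for each unlabeled testing instance rather than relying on the training-data classifier alone.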