Article ID: 388246
Journal: Expert Systems with Applications
Published Year: 2009
Pages: 8 Pages
File Type: PDF
Abstract

Computing the posterior probability distribution for a set of query variables by search is an important inference task in a Bayesian network. In real applications, it is also necessary to make inferences when the evidence is not contained in the training data. In this paper, we augment Bayesian network inference with a learning function and extend classical “search”-based inference to “search + learning”-based inference. Based on the support vector machine, we use a class of hyperplanes to construct the hypothesis space. We then use the method of solving an optimal hyperplane to find a maximum likelihood hypothesis for a value not contained in the training data. Further, we give a convergent Gibbs sampling algorithm for approximate probabilistic inference in the presence of maximum likelihood parameters. Preliminary experiments show the feasibility of our proposed methods.
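
As a rough illustration of the inference step described above, the sketch below runs Gibbs sampling on a toy three-node Bayesian network A → B → C. This is not the authors' implementation: the network structure, all conditional probability table values, and the parameter THETA_B (standing in for a maximum likelihood estimate such as the SVM-derived hypothesis mentioned in the abstract) are hypothetical choices made only for the example.

```python
# Minimal Gibbs sampling sketch for a hypothetical network A -> B -> C.
# THETA_B plays the role of a maximum likelihood parameter plugged into the CPT.
import random

P_A = {True: 0.3, False: 0.7}               # prior P(A=1), P(A=0)
THETA_B = 0.8                               # assumed ML estimate of P(B=1 | A=1)
P_B_given_A = {True: THETA_B, False: 0.1}   # P(B=1 | A)
P_C_given_B = {True: 0.9, False: 0.2}       # P(C=1 | B)

def bernoulli(p):
    return random.random() < p

def gibbs_prob_A_given_C(c_value=True, n_samples=20000, burn_in=2000):
    """Estimate P(A=1 | C=c_value) by Gibbs sampling over the hidden A and B."""
    a, b = True, True                        # arbitrary initial state
    count_a = 0
    for t in range(n_samples + burn_in):
        # Resample A from its Markov blanket {B}: P(A | B) proportional to P(A) P(B | A)
        w_true = P_A[True] * (P_B_given_A[True] if b else 1 - P_B_given_A[True])
        w_false = P_A[False] * (P_B_given_A[False] if b else 1 - P_B_given_A[False])
        a = bernoulli(w_true / (w_true + w_false))

        # Resample B from its Markov blanket {A, C}: P(B | A, C) proportional to P(B | A) P(C | B)
        w_true = P_B_given_A[a] * (P_C_given_B[True] if c_value else 1 - P_C_given_B[True])
        w_false = (1 - P_B_given_A[a]) * (P_C_given_B[False] if c_value else 1 - P_C_given_B[False])
        b = bernoulli(w_true / (w_true + w_false))

        if t >= burn_in and a:
            count_a += 1
    return count_a / n_samples

if __name__ == "__main__":
    print("Estimated P(A=1 | C=1):", gibbs_prob_A_given_C(True))
```

The sampler repeatedly resamples each unobserved variable from its conditional distribution given its Markov blanket; with enough iterations after burn-in, the fraction of samples with A=1 approximates the posterior query probability.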

Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence