Article ID: 9653606
Journal: Neurocomputing
Published Year: 2005
Pages: 20
File Type: PDF
Abstract
We discuss Bayesian decision theory on neural networks. In the two-category case where the state-conditional probabilities are normal, a three-layer neural network with d hidden-layer units can approximate the posterior probability in L^p(R^d, p), where d is the dimension of the space of observables. We extend this result to the multicategory case. The number of hidden-layer units must then be increased, but it can be bounded by d(d+1)/2, irrespective of the number of categories, provided the neural network has direct connections between the input and output layers. When the state-conditional probability is one of several familiar distributions, such as the binomial, multinomial, Poisson, or negative binomial, a two-layer neural network can approximate the posterior probability.
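The d(d+1)/2 bound is easiest to see in the two-category normal case: by Bayes' rule, the posterior P(C1 | x) is a logistic sigmoid of the log-odds, which for Gaussian class conditionals is a quadratic form in x, and a symmetric quadratic form on R^d has d(d+1)/2 distinct second-order terms (the linear terms can be carried by direct input-output connections). The following NumPy sketch checks this identity numerically; all distribution parameters (means, covariances, prior) are illustrative choices, not values from the paper.

```python
import numpy as np

d = 2                                  # dimension of the observable space
rng = np.random.default_rng(0)

# Illustrative (assumed) class-conditional Gaussians and prior.
mu1, mu2 = np.zeros(d), np.ones(d)     # class means
S1 = np.eye(d)                         # class covariances
S2 = 2.0 * np.eye(d)
p1 = 0.4                               # prior probability of category 1

def log_gauss(x, mu, S):
    """Log density of N(mu, S) evaluated at x."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(S, diff))

def posterior_bayes(x):
    """Exact posterior P(C1 | x) computed directly from Bayes' rule."""
    a = np.log(p1) + log_gauss(x, mu1, S1)
    b = np.log(1 - p1) + log_gauss(x, mu2, S2)
    return 1.0 / (1.0 + np.exp(b - a))

def posterior_quadratic(x):
    """The same posterior written as sigmoid(x^T A x + b^T x + c)."""
    A = 0.5 * (np.linalg.inv(S2) - np.linalg.inv(S1))      # d(d+1)/2 free terms
    b = np.linalg.solve(S1, mu1) - np.linalg.solve(S2, mu2)
    c = (0.5 * (mu2 @ np.linalg.solve(S2, mu2) - mu1 @ np.linalg.solve(S1, mu1))
         + 0.5 * (np.linalg.slogdet(S2)[1] - np.linalg.slogdet(S1)[1])
         + np.log(p1 / (1 - p1)))
    z = x @ A @ x + b @ x + c
    return 1.0 / (1.0 + np.exp(-z))

x = rng.standard_normal(d)
print(posterior_bayes(x), posterior_quadratic(x))  # agree up to rounding
```

This is only a sketch of the representational fact behind the bound, not the paper's approximation argument: the network realizes the quadratic part with hidden units while the skip connections supply the linear and constant terms.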
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors