| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 10524892 | Journal of Statistical Planning and Inference | 2012 | 7 | |
Abstract
We provide a decision-theoretic approach to the construction of a learning process in the presence of independent and identically distributed observations. Starting with a probability measure representing beliefs about a key parameter, the approach allows the measure to be updated via the solution to a well-defined decision problem. While the learning process encompasses the Bayesian approach, a necessary asymptotic consideration then implies that the Bayesian learning process is the best one. This conclusion follows from the requirement of posterior consistency for all models and of standardized losses between probability distributions. It is demonstrated for a specific continuous model and for a very general class of discrete models.
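A minimal sketch of the kind of decision problem the abstract alludes to, under the assumption that the updated belief minimizes a cumulative loss combining an expected data-dependent loss with a divergence from the prior; the symbols ℓ, ν, π, and x_{1:n} are illustrative notation, not taken from the paper:

```latex
% Hedged sketch: a loss-based belief update, assuming the update is the
% probability measure \nu minimizing expected cumulative loss plus a
% Kullback--Leibler divergence from the prior \pi.
\[
  \hat{\nu} \;=\; \arg\min_{\nu}
  \left\{ \int \sum_{i=1}^{n} \ell(\theta, x_i)\, \nu(\mathrm{d}\theta)
          \;+\; \mathrm{KL}\!\left(\nu \,\middle\|\, \pi\right) \right\}.
\]
% With the self-information loss \ell(\theta, x) = -\log f(x \mid \theta),
% the minimizer is the usual Bayesian posterior,
% \hat{\nu}(\mathrm{d}\theta) \propto \prod_{i=1}^{n} f(x_i \mid \theta)\, \pi(\mathrm{d}\theta),
% which is how such a decision problem can recover Bayesian updating.
```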
Related Topics
Physical Sciences and Engineering
Mathematics
Applied Mathematics
Authors
Pier Giovanni Bissiri, Stephen G. Walker
