Article ID: 1148843
Journal: Journal of Statistical Planning and Inference
Published Year: 2006
Pages: 20
File Type: PDF
Abstract

Identifiability has long been an important concept in classical statistical estimation. Historically, Bayesians have been less interested in the concept since, strictly speaking, any parameter having a proper prior distribution also has a proper posterior, and is thus estimable. However, the larger statistical community's recent move toward more Bayesian thinking is largely fueled by an interest in Markov chain Monte Carlo-based analyses using vague or even improper priors. As such, Bayesians have been forced to think more carefully about what has been learned about the parameters of interest (given the data so far), or what could possibly be learned (given an infinite amount of data). In this paper, we propose measures of Bayesian learning based on differences in precision and Kullback–Leibler divergence. After investigating them in the context of some familiar Gaussian linear hierarchical models, we consider their use in a more challenging setting involving two sets of random effects (traditional and spatially arranged), only the sum of which is identified by the data. We illustrate this latter model with an example from periodontal data analysis, where the spatial aspect arises from the proximity of various measurements taken in the mouth. Our results suggest our measures behave sensibly and may be useful in even more complicated (e.g., non-Gaussian) model settings.
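To make the proposed measures concrete, the following is a minimal sketch, assuming a conjugate univariate Gaussian model y_i ~ N(theta, sigma^2) with prior theta ~ N(mu0, tau^2). The function names (gaussian_kl, learning_measures) and the exact forms used here (posterior-minus-prior precision, and the Kullback–Leibler divergence from the prior to the posterior) are illustrative assumptions in the spirit of the abstract, not the paper's definitions.

    # Illustrative sketch only: conjugate Gaussian model, not the paper's exact measures.
    import numpy as np

    def gaussian_kl(mu_p, var_p, mu_q, var_q):
        """KL(N(mu_p, var_p) || N(mu_q, var_q)) for univariate Gaussians."""
        return 0.5 * (np.log(var_q / var_p)
                      + (var_p + (mu_p - mu_q) ** 2) / var_q
                      - 1.0)

    def learning_measures(y, sigma2, mu0, tau2):
        """Precision-difference and KL-based learning measures for
        y_i ~ N(theta, sigma2) with prior theta ~ N(mu0, tau2)."""
        n = len(y)
        prior_prec = 1.0 / tau2
        post_prec = prior_prec + n / sigma2                 # conjugate update
        post_mean = (mu0 * prior_prec + np.sum(y) / sigma2) / post_prec
        prec_gain = post_prec - prior_prec                  # precision-based measure
        kl = gaussian_kl(post_mean, 1.0 / post_prec, mu0, tau2)  # KL-based measure
        return prec_gain, kl

    rng = np.random.default_rng(0)
    y = rng.normal(loc=2.0, scale=1.0, size=50)
    print(learning_measures(y, sigma2=1.0, mu0=0.0, tau2=10.0))

Under a vague prior (large tau2) both quantities are driven almost entirely by the data, which is the regime in which the identifiability questions discussed in the abstract become pressing.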

Related Topics
Physical Sciences and Engineering > Mathematics > Applied Mathematics