Article code | Journal code | Publication year | English article | Full text |
---|---|---|---|---|
415997 | 681266 | 2010 | 8-page PDF | Free download |
Experimentation in scientific or medical studies is often carried out in order to model the ‘success’ probability of a binary random variable. Experimental designs are constructed for testing lack of fit and for estimation, for data with binary responses that depend on covariates under the experimenter’s control. It is supposed that the preferred model is one in which the probability of the occurrence of the target outcome depends on the covariates through a link function (logistic, probit, etc.) evaluated at a regression response — a function of the covariates and of parameters to be estimated from the data, once gathered. The fit of this model is to be tested within a broad class of alternatives over which the regression response varies. To this end, the problem is phrased as one of discriminating between the preferred model and the class of alternatives. This, in turn, is a hypothesis testing problem, for which the asymptotic power of the test statistic is directly related to the Kullback–Leibler divergence between the models, averaged over the design. ‘Maximin’ designs, which maximize (through the design) the minimum (among the class of alternative models) value of this power together with a measure of the efficiency of the parameter estimates, are also constructed. Several examples are presented in detail; two of these relate to a medical study of fluoxetine versus a placebo in depression patients. The method of design construction is computationally intensive, and involves a steepest descent minimization routine coupled with simulated annealing.
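The design criterion described in the abstract — the Kullback–Leibler divergence between the preferred model and an alternative, averaged over the design, then minimized over the alternative class — can be illustrated with a minimal sketch. This is not the authors’ algorithm; it assumes a hypothetical one-dimensional logistic design, a linear regression response, and a quadratic contamination class, with all names and numeric values purely illustrative:

```python
import math

def kl_bernoulli(p, q):
    # Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q),
    # for p, q strictly inside (0, 1).
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def logistic(eta):
    # Logistic link: maps the regression response to a success probability.
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical design: covariate values x with weights summing to 1.
design = [(-1.0, 0.25), (0.0, 0.5), (1.0, 0.25)]

# Preferred model: logistic link at a linear response eta = b0 + b1 * x.
b0, b1 = 0.0, 1.0

# Hypothetical alternative class: quadratic contaminations c * x**2
# added to the regression response (c = 0 would recover the preferred model).
alternatives = [-0.5, -0.25, 0.25, 0.5]

def avg_kl(c):
    # Design-averaged KL divergence between the preferred model and the
    # alternative with contamination coefficient c.
    total = 0.0
    for x, w in design:
        p = logistic(b0 + b1 * x)               # preferred model
        q = logistic(b0 + b1 * x + c * x * x)   # contaminated alternative
        total += w * kl_bernoulli(p, q)
    return total

# Worst case over the alternative class: the quantity a maximin design
# would seek to maximize by re-choosing the design points and weights.
min_kl = min(avg_kl(c) for c in alternatives)
```

In the paper this inner minimum is in turn maximized over the design itself (and combined with an estimation-efficiency term), which is where the steepest descent and simulated annealing machinery enters; the sketch above shows only the evaluation of the criterion for one fixed design.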
Journal: Computational Statistics & Data Analysis - Volume 54, Issue 12, 1 December 2010, Pages 3371–3378