Article ID: 10437681
Journal: Journal of Economic Behavior & Organization
Published Year: 2014
Pages: 22
File Type: PDF
Abstract
In the field of environmental policy, randomized evaluation designs are rare, so researchers typically rely on observational designs to evaluate program impacts. To assess the ability of observational designs to replicate the results of experimental designs, researchers use design-replication studies. In our design-replication study, we use data from a large-scale, randomized field experiment that tested the effectiveness of norm-based messages designed to induce voluntary reductions in water use. We attempt to replicate the experimental results using a nonrandomized comparison group and statistical techniques that eliminate or mitigate observable and unobservable sources of bias. In a companion study, Ferraro and Miranda (2013a) replicate the experimental estimates by following best practices to select a non-experimental control group, by using a rich data set on observable characteristics that includes repeated pre- and post-treatment outcome measures, and by combining panel data methods and matching designs. We assess whether non-experimental designs continue to replicate the experimental benchmark when the data are far less rich, as is often the case in environmental policy evaluation. Designs based on trimming combined with inverse probability weighting, as well as simple difference-in-differences designs, perform poorly. Pre-processing the data by matching and then estimating the treatment effect with ordinary least squares (OLS) regression performs best, although a bootstrapping exercise suggests that its performance can be sensitive to the sample (yet far less sensitive than OLS without pre-processing).
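The best-performing design named in the abstract (matching as pre-processing, then OLS) can be illustrated with a minimal sketch. This is not the authors' code or data: the covariates (pre-treatment water use, household size), the true effect size, and all sample values below are invented for illustration, and the matching step uses simple one-to-one nearest-neighbor matching on an estimated propensity score as one plausible instantiation.

```python
# A minimal sketch, assuming synthetic data, of matching as pre-processing
# followed by OLS estimation of the treatment effect. Not the authors' code;
# covariates, effect size, and sample are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
pre_use = rng.normal(50, 10, n)          # hypothetical pre-treatment water use
hh_size = rng.poisson(3, n) + 1          # hypothetical household size
# Nonrandom selection into treatment based on observables.
p_treat = 1 / (1 + np.exp(-(0.03 * (pre_use - 50) + 0.2 * (hh_size - 3))))
treated = rng.binomial(1, p_treat)
# Post-treatment outcome with an assumed true effect of -2 units.
post_use = pre_use + rng.normal(0, 5, n) - 2 * treated

df = pd.DataFrame({"treated": treated, "pre_use": pre_use,
                   "hh_size": hh_size, "post_use": post_use})

# Step 1: estimate propensity scores from observable characteristics.
X = df[["pre_use", "hh_size"]]
df["pscore"] = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]

# Step 2: match each treated unit to its nearest control on the propensity score.
ctrl = df[df["treated"] == 0]
trt = df[df["treated"] == 1]
nn = NearestNeighbors(n_neighbors=1).fit(ctrl[["pscore"]])
_, idx = nn.kneighbors(trt[["pscore"]])
matched = pd.concat([trt, ctrl.iloc[idx.ravel()]])

# Step 3: OLS on the matched sample, controlling for pre-treatment covariates.
Z = sm.add_constant(matched[["treated", "pre_use", "hh_size"]])
fit = sm.OLS(matched["post_use"], Z).fit()
print("estimated treatment effect:", fit.params["treated"])
```

Rerunning this sketch over bootstrap resamples of the original sample would mimic the sensitivity exercise the abstract describes.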
Related Topics
Social Sciences and Humanities > Economics, Econometrics and Finance > Economics and Econometrics