Disclaimer. Don't rely on these old notes in lieu of reading the literature, but they can jog your memory. As a grad student long ago, my peers and I collaborated to write and exchange summaries of political science research. I posted them to a wiki-style website. "Wikisum" is now dead but archived here. I cannot vouch for these notes' accuracy, nor can I even say who wrote them. If you have more recent summaries to add to this collection, send them my way I guess. Sorry for the ads; they cover the costs of keeping this online.
Bowler and Donovan. 2004. Measuring the effect of direct democracy on state policy: Not all initiatives are created equal. State Politics and Policy Quarterly 4:345-363.
You can't identify the effects of direct democracy simply by plugging in a dummy for "initiative states." You must look at how initiative institutions vary across states. The authors identify two dimensions of variation: X1, how difficult it is to get a proposal on the ballot; and X2, how easy it is for the legislature to override the initiative.
COMMENTS AND CRITICISM
Though the authors begin with a perfectly valid point, their implementation fails to satisfy. They commit the grossest sin of political science: They simply make up an additive index for each of their two dimensions (see Appendix, p 17). Trouble is, I don't see why I should expect the indicators in these indices to be at all additive. The authors make no effort to explain why these variables should form an index together. Moreover, the two indices are strongly correlated (r > 0.7), implying that the authors have identified only one dimension, not two.
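The one-dimension worry can be made concrete. If two indices correlate at r > 0.7, a principal-component decomposition of the pair loads nearly everything on the first component. A minimal sketch with simulated data (the latent variable, noise levels, and labels are invented for illustration, not the authors' actual measures):

```python
import numpy as np

rng = np.random.default_rng(0)

# One latent dimension driving both hypothetical indices.
latent = rng.normal(size=500)
x1 = latent + 0.5 * rng.normal(size=500)  # stand-in for "qualification difficulty"
x2 = latent + 0.5 * rng.normal(size=500)  # stand-in for "ease of override"

r = np.corrcoef(x1, x2)[0, 1]

# PCA on the standardized pair = eigendecomposition of the correlation matrix.
corr = np.corrcoef(np.vstack([x1, x2]))
eigvals = np.linalg.eigvalsh(corr)[::-1]          # largest first
share = eigvals[0] / eigvals.sum()                # variance on first component

print(f"r = {r:.2f}; first component carries {share:.0%} of the variance")
```

For two standardized variables the first component's share is (1 + r)/2, so anything above r = 0.7 means a single component carries over 85% of the joint variance: effectively one dimension, not two.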
They attempt to test their idea by replicating a few recent articles, inserting their own variables in place of the original authors' dummy variables. But why do cross-sectional tests like these? Why not look instead at institutions over time, or at variation before and after adopting the initiative? [Perhaps lack of data, considering the time period in which initiatives were adopted.]
A bigger problem is that the authors never directly pit their new variables against the old dummies. Look at Table 1 (p 25): they run separate regressions for the old and new variables. They don't even put their own two variables into the same regression! Moreover, neither X1 nor X2 explains any more variance in policy outcomes than the dummies did.
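The comparison the authors skip is a nested-model one: put the new variables into the same regression as the dummy and ask whether they add explanatory power. A hypothetical sketch with simulated state-level data (variable names and the data-generating process are made up; this is not the authors' specification):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # states

dummy = (rng.random(n) < 0.5).astype(float)   # "initiative state" dummy
x1 = dummy * rng.normal(1.0, 0.3, n)          # hypothetical qualification difficulty
x2 = dummy * rng.normal(1.0, 0.3, n)          # hypothetical ease of override
policy = 0.5 * dummy + rng.normal(size=n)     # outcome driven only by the dummy

def r2(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_dummy = r2(dummy.reshape(-1, 1), policy)
r2_both = r2(np.column_stack([dummy, x1, x2]), policy)

print(f"dummy only: R2 = {r2_dummy:.3f}; dummy + X1 + X2: R2 = {r2_both:.3f}")
```

If X1 and X2 really measure something beyond the dummy, the fuller model's R-squared should rise appreciably (an F-test on the nested models would make this formal); if not, the new variables add nothing, which is what the authors' own Table 1 suggests.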
Good idea, terrible implementation.