Saturday, April 14, 2012

The Milwaukee Voucher Program

In the last post, I noted that what we really care about when evaluating the effectiveness of voucher programs is how the voucher students would have done had they remained in public schools, and how the public school students would have done had the voucher students remained. One can make a stab at the first counterfactual by exploiting the mechanism by which vouchers are awarded: a lottery, i.e., random assignment. Some students of similar backgrounds and schools get vouchers while others must remain in public schools. By comparing the outcomes of the students who won the lottery to those who applied but did not win, we get an estimate of how the voucher students would have done had they remained in public schools.
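The logic of the lottery comparison is easy to see in a toy simulation. Everything below is made up for illustration (the sample size, the baseline score distribution, and the "true" effect are hypothetical, not Milwaukee data); the point is only that because winning the lottery is independent of a student's background, the losers' average outcome stands in for the winners' counterfactual:

```python
import random

random.seed(0)

TRUE_EFFECT = 5.0  # hypothetical test-score gain from attending a voucher school

applicants = []
for _ in range(10_000):
    baseline = random.gauss(50, 10)   # latent achievement, unobserved by the analyst
    won = random.random() < 0.5       # lottery: independent of baseline by design
    score = baseline + (TRUE_EFFECT if won else 0.0) + random.gauss(0, 5)
    applicants.append((won, score))

winners = [s for won, s in applicants if won]
losers = [s for won, s in applicants if not won]

# Random assignment means the losers' mean estimates what the winners
# would have scored in public schools, so the difference in means
# recovers the (hypothetical) true effect, up to sampling noise.
estimate = sum(winners) / len(winners) - sum(losers) / len(losers)
print(f"estimated effect: {estimate:.2f}")
```

Note that this compares everyone *selected* by the lottery to everyone not selected, regardless of whether winners actually enrolled; that is the bias-avoiding comparison at issue in the Peterson/Rouse disagreement below.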

The real debate over the effectiveness of the Milwaukee program (esp. on math and reading test scores) looked something like this:
  • 1991: Milwaukee program adopted
  • 1994/5: John Witte of the University of Wisconsin's political science department publishes the first official review using basic techniques and finds zero or small negative effects of the voucher program. (Witte publishes updates on the program regularly and generally finds mixed results.)
  • 1995: Paul Peterson of Harvard reanalyzes Witte's data, comparing attenders of voucher schools to other applicants, and finds big, positive effects. A problem with this analysis is that the attenders are not a random sample of the voucher winners, which may bias the results in the positive direction. Peterson's research objectivity comes into question in subsequent years: he publishes only when he finds large positive effects (which others do not find), and not always in refereed venues. Witte begins to call attention to "characteristics of pseudo-scientific right-wing trash" in some of his presentations.
  • 1998: Cecilia Rouse publishes a more careful analysis in the QJE. She compares those selected to attend a private school to those who were not selected (the approach I posited above), which avoids the potential bias in Peterson's work. She finds positive effects for math and no effect for reading.

There are similar papers for just about every other voucher program out there (Cleveland, Michigan, NYC, DC, Colombia, Chile, etc.), and papers that look at effects beyond math and reading scores (years of schooling and graduation rates, college attendance rates, cohabitation and pregnancy rates, etc.), but these are topics for other posts.

The general consensus is that voucher students do slightly better (or in some cases no worse) on various measures of short-term achievement than their public school counterparts.

But what about that other question: how do the public school students do when there are voucher programs? Because voucher programs are usually tied to the implementation of charter schools, this also raises the question of how competition affects education outcomes. If competition improves education all around, then the "no difference" results in the papers above could be explained by charter schools and the threat of vouchers forcing public schools to improve, lifting all boats, so to speak. This has sparked even hotter debates than the one described above, particularly in relation to Hoxby's "rivers" paper, but that's another post.

You don't want to read the papers? That's fine. You can watch the movie instead (it's also on Netflix). It's good and worth watching, but be warned: the movie is almost never as good as the book. "Waiting for Superman" focuses on the plight of a few select students, how problematic and prevalent failing schools really are, the difference between teachers and teachers' unions, and the lottery system. It is very light on actual numbers and on why vouchers and charters would actually fix the problem of failing schools.
