Over at The Washington Post's Answer Sheet blog, guest blogger and University of Virginia cognitive scientist Daniel Willingham unfurled, over the course of three weeks and three meaty posts, a proposal for how we might determine whether the Common Core Standards, which have been adopted by 33 states, are actually improving student outcomes.
In order to do so, he had to confront one truth about education policy:
Obviously schooling is complex, with a number of interacting factors that contribute to student outcomes. The critical point is that a problem in one part of the system might mask positive change in another part of the system, just as repairs to the electrical system of a car might appear to have no effect if the fuel system also needs repair.
So he advocates developing algorithms (not standardized tests, this time) that allow researchers to make both qualitative and quantitative predictions about an educational input: program A is better than program B, by X amount. He also proposes using "epidemiological models" to study everything that goes on in classrooms, just as such models are used to study what happens in neighborhoods.
I'm probably doing a poor job of explaining his ideas, so I'd definitely recommend reading his posts. But the take-home for me is this: if we truly want to study our education system in a way that reveals the effectiveness of a given initiative, it requires time horizons of a decade or so.
And that's a time frame, Willingham rightly points out, that usually includes a politically motivated change of course (often in the form of a reauthorization of the Elementary and Secondary Education Act).