
Is Education Reform Effective? Depends on the Definition.

Too many education solutions fall apart when you step back and ask some tough questions.


Here’s the dilemma for people who write about education: Certain critical principles need to be mentioned again and again because policymakers persist in ignoring them, yet faithful readers eventually tire of the repetition.

Consider, for example, the reminder that schooling isn’t necessarily better just because it’s more “rigorous.” Or that standardized test results are such a misleading indicator of teaching or learning that raising scores can actually lower the quality of students’ education. Or that using rewards or punishments to control students inevitably backfires in multiple ways.


Education policymakers have turned a blind eye to solid evidence supporting each of these points for decades, yet still call their work “reform.” Hence the dilemma: Will explaining in yet another book, article, or blog post why their premises are dead wrong have any effect other than eliciting grumbles that the author is starting to sound like a broken record?

Another axiom that has been offered many times to no apparent effect is that it means very little to say that a given intervention is “effective”—at least until we’ve asked “Effective at what?” and determined that the criterion in question is meaningful. Lots of educators cheerfully declare that they don’t care about theories; they just want something that works. But that raises the (unavoidably theoretical) question: What do you mean by “works”?

Once you’ve asked that, you’re obligated to remain skeptical about simple-minded demands for evidence, data, or research-based policies. At its best, research can only show us that doing A has a reasonably good chance of producing result B. It can’t tell us whether B is a good idea, and we’re less likely to talk about that if the details of B aren’t even clearly spelled out.

To wit: Several studies demonstrate the effectiveness of certain classroom management strategies, most of which require the teacher to exercise firm control from the first day of school. But how many readers of this research, including teacher educators and their students, interrupt the lengthy discussion of those strategies to ask what exactly is meant by “effectiveness”?

The answer, it turns out, is generally some variation on compliance. If a teacher does this, this, and this, it’s more likely that his or her students will do whatever they’re told. Make that explicit, and we must ask whether compliance is really the paramount goal. If, on reflection, a teacher decides that it’s most important for students to become critical thinkers, enthusiastic learners, ethical decision-makers, or generous and responsible members of a democratic community, then the basic finding—and all the evidence behind it—is worth very little. Indeed, it may turn out that proven classroom management techniques designed to elicit obedience actually undermine the realization of more ambitious goals.

An even more common example of this general point concerns academic outcomes. In scholarly journals, media coverage, and professional development workshops, any number of techniques are described as more or less beneficial—again, with scant attention paid to the outcome itself. The discussion of “promising results” is admirably precise about what steps achieved those results, while swiftly passing over the fact that the results consist of nothing more than scores on standardized tests.

We’re back, then, to one of those aforementioned key principles that are so often ignored. Standardized tests tend to measure what matters least about intellectual proficiency, so it makes absolutely no sense to judge curricula, teaching strategies, or the quality of educators or schools on the results of those tests. Indeed, as I’ve reported elsewhere, test scores have actually been shown to be inversely related to deep thinking.

Thus, “evidence” may demonstrate beyond a doubt that a certain teaching strategy is effective, but it isn’t until you remember to press for the working definition of effectiveness that you realize the teaching strategy (and all the impressive-sounding data that support it) is worthless because there’s no evidence that it improves learning rather than just test scores.

Which leads me to a report published earlier this year in the Journal of Educational Psychology. A group of researchers at the City University of New York and Kingston University in London performed two meta-analyses, statistically combining the results of many studies to quantify an overall effect. The title of the article, “Does Discovery-Based Instruction Enhance Learning?”, poses a question of interest to many educators.

The first review, of 580 comparisons from 108 studies, showed that unassisted discovery learning is less effective than “explicit teaching methods.” The second review, of 360 comparisons from 56 studies, showed that various “enhanced” forms of discovery learning work best of all. In other words, students learn better when they are guided by a teacher.
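
For readers curious about the machinery, here is a minimal sketch, with entirely made-up numbers, of the inverse-variance weighting at the heart of one common (fixed-effect) approach to meta-analysis; the paper’s own statistical model may well differ.

```python
# A minimal sketch (hypothetical numbers) of inverse-variance weighting,
# the core arithmetic of a fixed-effect meta-analysis. Each tuple holds
# (effect size d, variance of that estimate) for one made-up study.
studies = [(0.30, 0.02), (0.10, 0.05), (0.45, 0.01), (0.20, 0.04)]

# Weight each study by the inverse of its variance, so more precise
# (typically larger) studies pull the pooled estimate harder.
weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

print(f"pooled effect size = {pooled:.2f}")  # 0.34 with these numbers
```

The pooled number can look authoritative, but notice what it inherits: whatever definition of “effective” each underlying study happened to use.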

There are many possible responses to this news, ranging from “Duh” to “Tell me more about those enhanced forms, and which of them is most effective” to “How much more effective are we talking about?” That last question matters because a statistically significant difference can be functionally meaningless if the effect size is small.
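
To make that concrete, here is a quick back-of-the-envelope sketch, again with hypothetical numbers, showing how a trivially small difference can still clear the bar of statistical significance once the sample is large enough.

```python
# A minimal sketch (hypothetical numbers) of how a statistically
# significant result can have a negligible effect size.
import math

# Suppose two groups of 10,000 students each, with mean test scores
# differing by just half a point on a 100-point scale.
n1 = n2 = 10_000
mean1, mean2 = 70.5, 70.0
sd1 = sd2 = 10.0  # standard deviation of scores in each group

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (mean1 - mean2) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # 0.05 -- far below even the 0.2 'small' threshold

# Yet the t statistic is large because n is large, so the same
# difference registers as 'statistically significant' anyway.
t = (mean1 - mean2) / (pooled_sd * math.sqrt(1 / n1 + 1 / n2))
print(f"t = {t:.1f}")  # ~3.5, p < 0.001 with df = 19,998
```

With 20,000 students, a half-point difference on a 100-point test sails past the significance threshold while remaining, for any practical purpose, educationally negligible.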

I took my own advice and asked “What the hell did all those researchers, whose cooking was tossed into a single giant pot, mean by ‘effective’?” It’s astonishing how little this crucial definition appeared to matter to the review’s authors. There was no discussion of what effectiveness means in the article’s lengthy introduction or in the concluding discussion section. There wasn’t a word to describe, let alone analyze, what all the researchers were looking for. Did they want to see how these different types of instruction affect kids’ scores on tests of basic recall? Their ability to generalize principles to novel problems? Their creativity? (There’s no point in wondering about the impact on kids’ interest in learning—that almost never figures in these studies.)

Papers like this one are peer-reviewed and, as was the case here, are often sent back for revision based on reviewers’ comments. Yet apparently no one thought to ask these authors to take a step back and consider what kind of educational outcomes are really at stake when comparing different instructional strategies.

In fact, the desired outcome in education studies is often quite superficial, consisting only of standardized test scores or a metric such as the number of taught items that students correctly recalled. And if one of these studies makes it into the popular press, an examination of its desired outcomes probably won’t. In January I wrote about widespread media coverage of a study that supposedly proved one should, to quote The New York Times headline, “Take a Test to Really Learn, Research Suggests.” One had to read the study itself carefully to discover that “really learn” just meant “cram more facts into your short-term memory.”

But the problem isn’t just an over-reliance on outcome measures—rote recall, test scores, or obedience—that some of us regard as shrug-worthy and a distraction from the intellectual and moral characteristics that should be occupying us instead. The problem is that researchers are, as a journalist might put it, burying the lede. And too many educators don’t seem to notice.
