
Is Value-added Teacher Data Flawed?

Amidst the imbroglio kicked up by the Los Angeles Times' series of articles on teacher effectiveness data come the findings of a research paper authored by several prominent education experts and published by the non-partisan Economic Policy Institute. Its conclusion: It would be "unwise" to count student improvement (or slides) on standardized tests for up to 50 percent of a teacher's evaluation, as some states are proposing to do.

Louisiana, Tennessee, and Washington, D.C., are weighting value-added data at up to 50 percent. Other states, however, are not looking to depend that heavily on the controversial assessment. In response to the Times series, Los Angeles Unified School District Superintendent Ramon Cortines is shooting to use value-added assessments for 30 percent of a teacher's grade. A pilot program galvanized by The Gates Foundation in Tampa weights value-added data at 40 percent.


Maureen Downey over at the Atlanta Journal-Constitution's Get Schooled blog was among the first to point out the study questioning the emphasis on so-called "value-added data." While the study suggests that scores be limited to a small component of overall evaluations, Downey rightly notes that the idealized regimen these researchers propose—which includes "observations or videotapes of classroom practice, teacher interviews, and artifacts such as lesson plans, assignments, and samples of student work"—is, frankly, financially unfeasible.

Over at The Washington Post's Answer Sheet blog, Valerie Strauss asks of EPI study coauthor and Duke University economist Helen Ladd: Why use standardized test data at all?

Ladd's response:

Test scores are unreliable, but they are still more often right than wrong, but not sufficiently more often to justify making high-stakes decisions on the basis of test scores alone. But giving test scores too much weight in a balanced evaluation system runs the additional danger of creating incentives to narrow the curriculum, as we described in the paper. If they are not given too much weight, this danger is lessened. How much weight they should be given should be a matter of local experimentation and judgment. All we say in the paper is that giving them 50 percent of the weight is too much.


In addition to being "more often right than wrong," the scores are also essentially the sole objective component of teacher assessments. Thus, they certainly deserve some weight—though they shouldn't be the sole basis for personnel decisions.

And the place where that's especially true is L.A. As Slate's Jack Shafer writes, the L.A. Times was absolutely in the right to publish the data, as it's in the public domain and the public has a right to see it. Publication could, however, lead engaged parents to pull their kids from the classrooms of teachers who score badly, causing further non-randomization in the sort of students each teacher is assigned and possibly skewing future data: poorer-performing teachers will likely end up with poorer-performing students (or at least students whose parents didn't make a decision about their child's schooling based on this data set).

That's why I'd side with Green Dot Public Schools CEO Marco Petruzzi, who wrote in response to the Times data dump:

I'm open to publicly grading schools, and I'm also in favor of transparency within a school community, where you can set the data within the right context. I'm uncomfortable, however, with publishing a narrow data point with a person's name attached to it for public consumption without the proper context.


The data is valuable. But it's best used by administrators and the teachers themselves, who can then work to improve their staffs or, if need be, steer members out of the profession.

Photo via Houston Independent School District.
