What's the best way to evaluate teachers? And what do you do with the evaluations once you have them? These two questions have been debated with particular vigor recently, as a certain form of evaluation got a major jolt in the form of a Los Angeles Times feature. The value-added type of evaluation, as The New York Times explains, compares students' percentile performance from year to year:
A student whose third-grade scores were higher than 60 percent of peers statewide is predicted to score higher than 60 percent of fourth graders a year later.
If, when actually taking the state tests at the end of fourth grade, the student scores higher than 70 percent of fourth graders, the leap in achievement represents the value the fourth-grade teacher added.
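The percentile comparison described above can be sketched in a few lines of code. This is a minimal illustration only: the score distributions and the individual student's scores are invented numbers, and real value-added models are far more elaborate (regression-based, adjusted for student demographics, and estimated over multiple years).

```python
def percentile_rank(score, cohort):
    """Percentage of the cohort scoring strictly below `score`."""
    below = sum(1 for s in cohort if s < score)
    return 100.0 * below / len(cohort)

# Hypothetical statewide score distributions (illustrative numbers only).
third_grade_scores = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]
fourth_grade_scores = [58, 61, 63, 66, 71, 74, 77, 81, 86, 91]

# A student who scored 75 in third grade sits at the 60th percentile,
# so the model predicts the 60th percentile again in fourth grade.
predicted = percentile_rank(75, third_grade_scores)

# The student actually scores 81 in fourth grade: the 70th percentile.
actual = percentile_rank(81, fourth_grade_scores)

# The 10-point percentile leap is the "value" attributed to the teacher.
value_added = actual - predicted
```

This mirrors the Times' example: a student predicted to beat 60 percent of peers who ends up beating 70 percent yields a value-added score of 10 percentile points for the fourth-grade teacher.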
Value-added evaluations are already performed in some districts and states, but there's pretty intense disagreement about how effective they are and how they should be used. It turns out that the Los Angeles Unified School District has long had the raw data available for such evaluations but hasn't used it. That's where the L.A. Times came in, procuring the data and performing its own analysis--and then publishing it, teacher names and all.
Was this helpful? Fair? Teachers and non-teachers alike can't seem to reach a consensus. The story, though, has brought the value-added evaluation debate into the limelight.
- 'Took Bravery,' Slate's Jack Shafer says of the Times' story, particularly in "liberal, union-enslaved Los Angeles." The findings weren't news: "The paper found that effective teachers 'often go unrecognized'; that the school district does not act on the information it's gathered to fire ineffective teachers because it basically fears the union," and more. "By doing something [the district] should have done in the first place, the Times had shamed the cowardly school district into performing its own 'value-added analysis' of the data." Shafer just wishes the district were going to publish it.
- But the Times Didn't Mention How Unreliable the Data Was. In fact, argues Sherman Dorn, by not including graphs that showed the "inherent imprecision" of the data, the paper "blatantly misrepresents the accuracy of its statistical model results for people who did not choose to have their names in public."
- Either Use the Data or Stop Testing. It's pretty clear we don't know what makes a good teacher, says Mother Jones' Kevin Drum. "This is no surprise, I guess, since we have so little idea of what makes someone great at any profession." Still, evaluating teachers is even harder than evaluating CEOs or product managers. Says Drum:
The criticisms of value-added seem compelling. At the same time, if a teacher scores poorly (or well) year after year, surely that tells us something? At some point, we either have to use this data or else give up on standardized testing completely. It just doesn't make sense to keep using it if we don't bother taking the results seriously.
- 'The Worst Form of Teacher Evaluation ... Better Than Everything Else,' says Chad Aldeman at The Quick and the Ed. Teacher experience, "education credentials, ... certification status, ... [teachers'] college GPA, even in-class observations" are ineffective and don't "do as good of a job at predicting a student's academic growth as a teacher's value-added score." Bizarrely, writes Aldeman, "we continue to use these poor proxies for quality at the same time we have such passionate fights about measures of actual performance."
- Not Evaluating Is the Real Disservice to Teachers, argues Kevin Carey, also at The Quick and the Ed. Currently we have no way of recognizing good teachers. We can just say they meet standards. That "helps depress the public understanding of all teachers as professionals. ... How long do great teachers have to wait to be recognized? How long are they going to be held hostage to a mindset that pretends they don't exist?"
- We Are Focusing on the Wrong Thing Here, argues Sara Mead at Education Week. Value-added data is "only available for a subset of teachers" to whom standardized scores apply--it should be used when available, but, however effective or ineffective it may be, the debate shouldn't hijack the bigger picture. Says Mead: "I'm even more concerned that the observational rubrics many districts and states will put into place under their proposed evaluation systems have not yet been validated than I am with any of the issues related to use of value-added data."
- The Teachers Speak. The Times followed up by publishing teacher responses to the data. One said "targeting" teachers "by name in a newspaper is degrading," while several others expressed gratitude: "I wish I'd had access to this type of data my very first year of teaching," wrote Melanie Podley. A number worried that "emphasis on test scores will encourage more teaching to the test" rather than teaching "critical thinking," as Helen Steinmetz put it. Though calling The Times' motive "laudable," William Matthew Covely was concerned that too many reports find tremendous flaws in the value-added approach: "what I think The Times has done in this large and complex debate, essentially, is jump the gun on the value-added theory, and has, in the process, unjustly damaged the reputation of thousands." Finally, another teacher took a different approach, pointing out that parents have an effect as well: "I'd love to see a 'value-added measure of performance' for parents."
This article is from the archive of our partner The Wire.