Can value-added be applied to teacher preparation? Scholars weigh in
Can the value-added method be used, fairly and reliably, to differentiate among teacher preparation programs? According to scholars who have studied the issue, this remains something of an open question.
...One of the papers, by the University of Washington's Dan Goldhaber and a colleague, found that graduates of most programs performed neither better nor worse than teachers trained out of state. Only a small number of institutions produced graduates who, on average, had a significant, positive effect on student math scores.
A study of teachers from various training programs in New York City, however, found some "meaningful" differences among preparation routes, both traditional and alternative. The paper, by five well-known teacher-quality researchers, also found that stronger oversight of the student-teaching experience seemed to produce better teachers.
A third paper, by Cory Koedel of the University of Missouri and three colleagues, sounds the most cautious note on applying value-added to teacher preparation. Using a value-added approach, their study found virtually no aggregate differences in the effectiveness of graduates of Missouri's teacher preparation programs. The variation within programs was so great, the scholars found, that many teachers from the lowest-performing program would likely still outperform the average teacher from the highest-performing program.
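The Missouri finding rests on a general statistical point: when the spread of outcomes within each program dwarfs the gap between program averages, the programs' distributions overlap heavily. A minimal simulation sketches the idea — all numbers here are illustrative assumptions, not figures from the Koedel study:

```python
import random

# Hypothetical illustration (assumed numbers, not from the Missouri study):
# two programs whose graduates' value-added scores differ slightly on
# average but vary widely within each program.
random.seed(0)

LOW_MEAN, HIGH_MEAN = -0.02, 0.02   # small between-program gap
WITHIN_SD = 0.20                    # large within-program spread
N = 10_000                          # simulated graduates per program

low_program = [random.gauss(LOW_MEAN, WITHIN_SD) for _ in range(N)]

# Share of graduates from the lowest-performing program who still
# outperform the average graduate of the highest-performing program.
share_above = sum(score > HIGH_MEAN for score in low_program) / N
print(f"{share_above:.0%} of low-program graduates beat the high-program average")
```

Under these assumed parameters, roughly four in ten graduates of the weaker program score above the stronger program's average, which is why rankings of programs by mean value-added can say little about any individual graduate.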
...A final paper, primarily by researchers from the nonprofit research organization RAND, examines one of the conceptual challenges to this type of analysis: Programs are often geographically isolated and therefore send many of their graduates to the same few schools. This makes apples-to-apples comparisons of graduates difficult, since they often end up teaching in very different contexts.