
Tuesday, May 13, 2014

VAM is a scam, but that doesn't stop its implementation...

It should be obvious by now to anyone paying attention that the value added model (VAM) of teacher evaluation is junk science, and that the only reason to use it in any evaluation system is to be punitive toward teachers. Apparently, however, neither the Department of Education nor the Ohio legislature (among others) has been paying attention as they are both hell-bent on implementing unreliable, punitive, immoral VAM systems that could lead to groundless teacher firings. Lawsuits are being filed and contemplated across the nation as a result. This is a topic with which everyone should be familiar. The following is from U.S. News and World Report.

 

Report Finds Weak Link Between Value-Added Measures and Teacher Instruction

States should have more time to examine the quality of the measures, the report says.


The value-added model of measuring teacher performance can have weak or nonexistent relationships with the content and quality of teachers' instruction, a new report finds.
A spreading method of evaluating teacher performance that places significant importance on student growth measures has a weak to nonexistent link with the content and quality of teachers' instruction, according to new research published Tuesday. 
Morgan Polikoff and Andrew Porter, two education experts, analyzed the relationships between "value-added model" (VAM) measures of teacher performance and the content or quality of teachers' instruction by evaluating data from 327 fourth and eighth grade math and English teachers in six school districts. The weak relationships made them question whether the data would be useful in evaluating teachers or improving classroom instruction, the report says.
[READ: Following Lawsuit, Florida Releases Teacher Evaluation Scores]
"Conceptually, folks think that measures of both content and quality should be predicting student achievement growth," says Polikoff, an assistant professor of education at the University of Southern California. "In fact, value-added scores weren't really systematically related to either our content measure, or to the pedagogical quality measure." 

Porter, who co-authored the report with Polikoff, is the dean of the University of Pennsylvania's Graduate School of Education.
The value-added model, which is in place in about 30 states, attempts to measure a teacher's contribution to student academic growth by comparing the test scores of an individual teacher's students both to those same students' scores from past years and to the scores of other students in the same grade. In some states, such as Ohio, it can account for up to half of a teacher's entire evaluation score. 
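To make the mechanism in that paragraph concrete, here is a bare-bones sketch of a value-added-style estimate. This is an illustration only, not any state's actual model: real VAMs add many demographic controls, multiple prior years, and statistical shrinkage. The sketch fits a simple least-squares prediction of this year's score from last year's, then averages each teacher's students' residuals (how far they landed above or below the prediction).

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def value_added(records):
    """records: list of (teacher, prior_score, current_score) tuples.

    Returns each teacher's mean residual: the average amount by which
    their students beat (positive) or missed (negative) the score
    predicted from the prior year's test.
    """
    xs = [prior for _, prior, _ in records]
    ys = [curr for _, _, curr in records]
    a, b = fit_line(xs, ys)
    residuals = {}
    for teacher, prior, current in records:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(rs) / len(rs) for t, rs in residuals.items()}
```

With four students split between two hypothetical teachers, a teacher whose students consistently outscore the prediction gets a positive estimate and the other a negative one; the statistical critiques quoted below concern how little of the underlying variation such an estimate actually attributes to the teacher.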
Many states are implementing new teacher evaluation systems that place a greater emphasis on student growth measures because the Obama administration has required them to do so if they want to keep their waivers from No Child Left Behind. In fact, Education Secretary Arne Duncan in April revoked Washington's waiver because its legislature failed to implement a teacher evaluation system that met the federal requirements. The waiver requirements stipulate that states should have these systems in place for the 2014-15 school year and use them to influence personnel decisions by the following year.
[RELATED: States Need to Connect Teacher Evaluations to Other Quality Measures, Report Says]
But the Department of Education on Friday sent updated guidance to state education chiefs, saying it would grant some states extensions on their waivers even if their teacher evaluation systems aren't yet acceptable, Education Week first reported.
That flexibility could give states more time to study the relationship between value-added measures and teacher performance more closely, Polikoff says.
"If I had my druthers, I would say we need to slow way down the implementation of these teacher evaluation systems because we just don’t know enough about the quality of these measures," Polikoff says. "And we have reason to believe a lot of the measures actually aren't very good quality."
Some previous studies have shown stronger correlations between value-added measures and teacher instruction, while others have found almost no relationship, he says. 
"It's not clear to me what the reasons are for those differences, but as these systems are rolling out, states need to really study these relationships and think about in the cases where the correlations are really low, what can you do with those data?" Polikoff says.
[ALSO: Connecticut Cautiously Optimistic of Teacher Evaluation System]
Studying those relationships is particularly important as 44 states and the District of Columbia implement the Common Core State Standards and new state assessments aligned to those standards. But the expectation that student scores will drop on the Common Core-aligned tests, combined with the fact that teacher evaluations can influence personnel decisions, has drawn sharp criticism from teachers unions and other organizations. 
In April, the American Statistical Association issued a statement criticizing the use of value-added models, saying teachers account for between 1 and 14 percent of the variability in student test scores. 
"Ranking teachers by their VAM scores can have unintended consequences that reduce quality," the statement said. "This is not saying that teachers have little effect on students, but that variation among teachers accounts for a small part of the variation in scores. The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences."
[MORE: States Improve Policies Tied to Teacher Effectiveness, Report Says]
Still, Polikoff says many states use a value-added model that does not take into account certain student characteristics. Rather than lumping several different dimensions of teacher quality into one index, Polikoff says each should be taken on its own to get a more holistic picture of teacher performance. 
"We have this kind of fetish of both for teachers and for schools putting everything into one index," Polikoff says. "Certainly there's some value to that – it makes things clear and if you want to make a policy decision you can just set a cut score. But I think actually you lose a lot of useful information when you do something like that."
