Thursday, October 17, 2013

Value-added and Student Growth models still a mess

I haven't posted in some time about the new teacher evaluation system we're rolling out. While the goal-setting and classroom observation pieces are happening now, Massachusetts has not yet moved to linking teacher evaluation to student test scores.
That's still scheduled to happen, though, which is why I'd urge you to read Bruce Baker on what a mess these methods are.
The argument I've repeatedly heard in Massachusetts--including from the floor of our own School Committee--is that we're not doing value-added measurement, we're doing student growth measurement. Essentially, that means we see where a kid is at the beginning of the year and then measure how far that kid has come over the time spent with the teacher. While this sounds nice and clean, it turns out to be even more of a mess than value-added.
But don't believe me: go read Professor Baker:
At this point, I’m increasingly of the opinion that even if there was a possible reasonable use of value-added and growth data for better understanding variations in schooling and classroom effects on measured learning, I no longer have any confidence that these reasonable uses can occur in the current policy environment.