You may remember that part of the new Massachusetts teacher evaluation system is a "Student Impact Rating," which, while based on multiple measures, must by state law (enacted due to Race to the Top) include student test scores.
But which scores? From which test?
Addressing this issue--I'm not sure I'd say "answering this question"--is this memo from Commissioner Chester.
They're going to use two years of data: 2014-15 and 2015-16. If I'm understanding the memo correctly, if a district's 2014-15 PARCC scores (should your district be using them) don't agree with its 2013-14 MCAS scores and its 2015-16 scores on whatever test we're using by then, those PARCC scores get thrown out (or rather, the districts throw them out, because the state isn't doing this work), and the 2013-14 MCAS scores get used instead.
There are several statements in this memo about all of this aligning as a "strong indicator of student learning." No, no, it's not. Student growth percentiles are not a legitimate way to evaluate teachers (they're actually worse than value-added). Please allow me to direct you to Bruce Baker on why not.
Also, this is based not on one but on two entirely different tests (possibly! We don't actually even know what we're using for the second of these years!), which makes the assertion that these are meaningful comparisons that should carry real consequences that much more absurd.
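For readers who haven't seen the mechanics: a student growth percentile is, roughly, where a student's current score ranks among students who had similar prior scores. Here's a minimal sketch of that idea--a simplification, since the actual methodology uses quantile regression across multiple prior years, and every number below is invented for illustration:

```python
# Toy sketch of a student growth percentile (SGP), simplified to
# "percentile rank among students with similar prior-year scores."
# Real SGPs are estimated with quantile regression; all data here
# is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior-year (say, 2013-14 MCAS) and current-year scores.
prior = rng.integers(200, 281, size=1000)
current = prior + rng.normal(0, 10, size=1000)

def sgp(student_prior, student_current, prior_scores, current_scores, band=5):
    """Percentile rank of a student's current score among peers
    whose prior scores fall within +/- band points of the student's."""
    peers = np.abs(prior_scores - student_prior) <= band
    return 100.0 * np.mean(current_scores[peers] < student_current)

print(sgp(240, 252, prior, current))  # e.g., ~85: outgrew most "similar" peers
```

Note what the whole calculation hinges on: the score scale and the peer distribution of one particular test. Swap the prior-year test (MCAS) for a different one (PARCC) with a different scale and difficulty, and "similar peers" means something else entirely--which is exactly why chaining two different tests together and calling the result growth is so dubious.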
I am sure that there will be more on this to come, as this information starts to get around.