Tuesday, December 21, 2010

Compounding the error

You may have caught that the Massachusetts Teachers Association has today proposed that teachers be in part evaluated based on their students' test scores. (The full brief is here; as you might guess, it's significantly more complicated than that headline.)
On one hand, this isn't so surprising. As part of the Race to the Top application, Massachusetts had to agree to tie teacher evaluation to student test scores in some way. Thus, when the MTA signed on to RTTT, they were signing on to this.
On quite the other hand, there's a big difference between "we're going to go along with this bad idea" and "oh, let us please be the first to sign up!" If anyone in all of this should know exactly what the MCAS (or insert standardized test here) does and does not evaluate, it's classroom teachers. There's a conversation to be had here (and teachers, I hope you're having it) about the remove that state union leadership has from the classroom, by virtue of the union being their full-time job. It's outrageous that the union leadership would cast this as somehow (as the MTA president was quoted in the Globe this morning as saying) not "protecting bad teachers." You can be a lousy teacher and have great test scores; you can be a great teacher with lousy test scores. The two do not correlate. No one knows better than teachers exactly how much one student's fabulous progress and another's lack thereof have to do with that classroom year. To pretend otherwise is nonsensical and ignorant.
Further, you've just exacerbated the influence of Campbell's Law: the more you use any single measure as an indicator, the more you corrupt the indicator. (For more on Campbell's Law, see here.) In other words, if you want the MCAS to mean anything, you have to stop depending on it to mean everything. Originally, we were going to measure how districts were doing. That's a big enough picture that it might be a little difficult to play with: you probably get a lot of teaching to the test, and you corrupt the curriculum, but you're not getting case-by-case dabbling. Then we started evaluating schools with it...and individual students. Now you're going to add teachers to that. If you're a data geek, or even just someone who wants this number to mean something, you should be outraged that they're going to corrupt your numbers this way.
Unfortunately, we appear to be utterly lacking anyone in educational leadership who understands any of this.

2 comments:

  1. Tracy,

    I completely agree - I think you should write an Op-ed to the Telegram.

    I was trying to explain to my wife why the Feds gave the State money through RTTT, and how Boylston did NOT vote for RTTT money but yet still has to deal with following the regulations of RTTT, which now I guess includes the MTA agreeing to have teacher performance measured by student testing?!?

    Doesn't seem fair at all. If I'm a teacher, I want all the smart kids who test well, and forget about a chess or music program, because it isn't tested.

    I think you could write an excellent op-ed explaining the pros and cons of taking Federal handouts and keeping education control local.

  2. Thanks, Brad. I can't write in until Jan. 11, as I'm in a 60-days-between-letters period.

