Commissioner Chester shared with the Board of Ed what's being presented as a case study of educator evaluation in Massachusetts. You can find the press release on it here.
To be fair, I don't know what the parameters of a "case study" are, but the overwhelming factor driving this one is who they talked to and, more to the point, who they didn't. A quick scan of the footnotes turns up four teachers and two principals, all of whom are used for pull quotes in the report. Discussions with those most involved in teacher evaluation, then, appear simply not to have happened. They did, however, spend a lot of time talking to the Department.
Now, I realize that this is being presented as a sort of counter narrative to the various states that placed lots of weight specifically on test scores as part of Race to the Top. I'm all for counter narratives--and more importantly, other options--on that. If we're really going to look at other options on educator evaluation, though, it's important to look at how this system in fact functions on the ground.
That isn't the picture CAP's report gives us, though; instead, it offers a sort of vision of how the system is supposed to work.
A review of how it is working--and how it isn't working--would be valuable, particularly if we're going to present this as an alternative option. Off the top of my head: the need for more time, more space, and more staffing for thoughtful conversations about teaching; the technological demands of pulling all of this together; the question of how the tension between evaluation for possible dismissal and evaluation for improvement can be worked out; the level of trust necessary for any of this to work. These are good issues to raise and to deal with, and I'm sure there are others. They aren't raised here, though, where the picture is largely rosy. And that's a missed opportunity.