In all the talk about assessment of students, teachers, and principals surrounding both Senate Bill 2216 here in Massachusetts and Race to the Top, one thing we aren't hearing enough about is assessment instruments. So far, the default in Massachusetts--and it's largely unquestioned--is the MCAS exam.
But basing the entire structure of how we evaluate everyone on such an exam presupposes that the exam is a good instrument of evaluation, that it accurately tests skills and knowledge rather than anything else.
I've spoken about this before, but I bring it up now because what is on the exam, and how it is presented, has everything to do with results. Take a look, for example, at this from New York's Regents exam.
Someone decides not only what is going to be on the test, but how it is going to be tested. Those decisions are not politically neutral. And since those decisions are made by the enormous testing companies that benefit most from the millions of dollars (are we up to billions yet? Probably) states send them, are they made on the basis of educational accuracy or on the basis of making more money?
Let's at least ask this question, shall we?
As someone who was employed for years, on a part-time basis, to tutor and teach test prep (SAT, GRE, and -- actually -- MCAS, for one school over the course of a couple of months), I can attest that test prep is a big money-making scheme. I will also say that I am very skeptical of most tests' ability to measure intelligence.