I don't do a great job of blogging my own comments from School Committee, so here's a write-up of my notes on tonight's vote, typed up pre-meeting.
We aren't having the right conversation about assessment. The conversation we should be having is "how do we most accurately and authentically assess our students?"
Instead, we're having a conversation about which of two poor assessment systems each of our schools will go with. I've spent twenty years speaking out on the failures of the MCAS to assess our students accurately; I'm under no illusion that it's any sort of superior assessment system. While it's been lovely to hear this message from DESE this past year, they're making it simply in defense of the PARCC.
I'd hoped that the PARCC would be an improvement. I spent a good bit of time running through their sample questions online, though, and discovered that not only did it have the technology glitches I expected, it also had some of the same lousy questions I've seen on the MCAS: a "find the main idea" item in which the main idea was not among the choices; a "defend your answer with quotations from this list" item in which none of the quotations was a reasonable defense.
Thus my first question, through the Chair to the administration: is the state or the PARCC consortium taking any sort of feedback on the test items themselves?
We should be using this opportunity not to figure out this "new" system, but to ramp up our own authentic assessments. We have some great ones happening already: the Gateway projects at University Park; the similar projects adopted by Goddard Scholars; science fairs and book projects. Let's work with our schools to stress those assessments.
Likewise, the schools going with the PARCC will have two rounds of that test. Through the Chair to the administration: has this been addressed with the schools in terms of the other standardized assessments they're doing? Has the spring MAP been dropped? I'd like a report on the overlap and how it's being handled.
I recently read an article in the Texas Observer in which Professor Walter Stroup calculated that as much as 72% of what the Texas STAAR test assessed was how well the kids could take the test: you could swap out questions and even subjects, and you'd get nearly identical results. I haven't seen anyone do anything quite like that with our tests, but a similar result wouldn't surprise me.

One thing that clearly will be tested by this test is the facility with which our children can respond on computers. I know there's a popular idea that this generation is one of digital natives; that is not as true as believed. It is also a world away from Minecraft to typing a long composition drawing on multiple sources of reading material. I would like us (motion) to send a letter to DESE expressing our concern about the degree to which the use of technology is unwittingly being assessed. We really need to stop using tests that have these "hidden" assessments in them; they are not fair.
I'd also like to make a motion to lay out clear plans for getting our kids keyboarding as part of their schedules. If we're moving to online assessment (whatever it's called), we need to be sure our kids have every advantage we can give them on it.