Tuesday, January 7, 2014

Cutting through the lingo in the ESEA waiver survey

It's been brought to my attention that if you tried the ESEA waiver survey to which I directed you yesterday, you may well have found yourself in over your head in education lingo. Let's see if we can cut through some of that!
First of all, the survey is part of an effort to get public input on Massachusetts' application for an extension of our No Child Left Behind waiver. You probably remember that every school in the country was supposed to be 100% proficient by 2014? That'd be this spring. Everyone recognized this wasn't going to happen, but no one managed to get things together enough to fix the law (and no one was willing to just say to ignore it), so the federal Department of Education granted waivers from it to states that met certain conditions (which bore a remarkable similarity to Race to the Top).
Anyway, that's what's going on here.
The chart which is the second question is tackling two things, both of which have to do with MCAS scores:
  • proficiency gaps are (as they say) the space between where kids' MCAS scores are now and every kid scoring proficient. This is measured for all kids, but it's also measured for subgroups of kids: kids on IEPs, kids who are learning English, and so forth. The deal Massachusetts made with the federal government was that we would halve the MCAS proficiency gap for everyone by 2017. The districts and schools with the largest proficiency gaps are poor and in cities.
  • accountability levels are the "Level 1" to "Level 5" grades (so to speak) that schools and districts have been assigned. For schools serving grades 3-8, the level is based on slicing and dicing MCAS scores a couple of ways--but it's all MCAS. For high schools, it also includes dropout rates and graduation rates. This is a new system--it came in under Race to the Top--as the old one was "everyone gets to proficient by 2014." And where are the "best student outcomes"? In the suburbs: largely wealthy ones, largely white ones.
The next bit talks about "student growth." This is, again, how much a student's MCAS scores have gone up from one year to the next; it is assumed that this growth is entirely within the control of the school and the teacher, who are the ones being held "accountable" for it.
Student subgroups are what we were talking about before: all the kids who are on IEPs, all the kids who are learning English...and how much difference there is in their MCAS scores versus last year. Likewise, dropout and graduation rates are as above.

For section 6, they're looking at the fairness of the leveling system (which is great, though it'd be better if they asked whether we should have it at all). Right now, if the Creamer Center were considered a separate school, it would get a level, even though it serves entirely students who are at risk. Essentially, the current system would penalize that school for the very reason it exists. They're also talking about making getting students through grade 9 a measure of success.
The last two in this section are on the new PARCC exam. The Governor (and others) promised that they would only switch if the new test is better than MCAS. The question on this is kind of impossible to disagree with--is anyone going to say, "no, you should definitely go for the worse test"?--but it presupposes that the MCAS is a good test to begin with. The second part is the trick we've been talking about with PARCC: MCAS, and the levels it creates, still count, even as we switch to a new test. While the default answer appears to be (much like on the above question) "of COURSE we should keep schools accountable," that isn't really quite what's going on here.
Section 7 is a sort of "give us a feel for the people who you talk to" question, which one supposes means we should be careful who that is. Each question asks if parents are "aware" of an issue and then if they "support" the issue; in other words, do they know about it? And do they like it? The district accountability system is "how are you bringing up MCAS scores," and "support" for that is going to probably read as support for the whole system. Likewise, do they know that we're switching tests, and do they support trying them out this spring? Do they know we're evaluating teachers differently and do they support that?
Then, the fun part: comment section. If there's a character limit, I know I didn't hit it, so type away!

And let me know if you ran into anything further you didn't get, please. Also, take it this week! It ends Friday.
