Friday, July 9, 2010

There are three kinds of lies...

sorry, obvious title, I know...

The news out of Texas, where the state testing is done by none other than Pearson, has been rather alarming lately. It seems there has been a dramatic improvement in the results of the state tests and in the school ratings that accompany them. Texas State Representative Scott Hochberg held a hearing on the matter, closely questioning Chriss Cloudt, the state's associate commissioner for assessment, accountability and data quality, over the results:

Hochberg asked them what accounted for the huge increase in the number of schools and school districts rated as "recognized" and "exemplary" in 2009.

Jones said he couldn't "intelligently answer that question," but Cloudt jumped in.

"Yes, I can," she said. "Performance."

She elaborated that the percentage of schools and districts rated in the top two categories had gone from the teens to the 60s because the state "defined a body of knowledge that students must learn and demonstrate knowledge of, your testing program measures that content and what you want to see is increases in performance on that test over time."

'Projected' numbers

Hochberg appeared skeptical. He noted that the number of school districts given the top rating of "exemplary" based on TAKS scores had risen from 43 in 2008 to 117 in 2009.

He also noted that 73 of the 74 additional "exemplary" districts used the Texas Projection Measure to attain that distinction.

There was an effort this year to count kids who improved, rather than just those who had passed, with this somewhat startling result:

After a couple of examples in which a school got to count a student as "passing" with depressingly low scores, Hochberg asked Cloudt and an associate to see how many correct answers a fourth-grader with barely passing math and reading scores at Benavidez Elementary in Houston needed to be counted as "passing" the writing test.

The unbelievable answer Hochberg had reached himself was confirmed by Cloudt: The child needed zero correct answers for his or her teachers and administrators to get credit for his or her "improvement."

There have also been questions about the growth model used:

Cloudt said the Texas Projection Measure is a "growth measure." To most of us, that would imply that it looked at how a child did this year compared to last.

Hochberg brought out that it doesn't. It looked only at last year's scores and, based on a formula devised from thousands of prior results, projected that children who pass reading or math were likely to pass other tests in future years.
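To make that distinction concrete, here's a minimal sketch in Python. The function names, coefficients, and passing cut score are all invented for illustration; this is not the actual TPM formula, just the shape of the two approaches being contrasted.

```python
# Hypothetical illustration only: invented coefficients and cut scores,
# not the real Texas Projection Measure formula.

def actual_growth(this_year_score: float, last_year_score: float) -> bool:
    """A true growth measure: did this student's own score improve?"""
    return this_year_score > last_year_score

def projected_to_pass(last_year_reading: float, last_year_math: float) -> bool:
    """A projection in the style described above: a formula fit to thousands
    of prior results predicts a future score from last year's scores alone."""
    predicted = 0.45 * last_year_reading + 0.40 * last_year_math + 15.0
    return predicted >= 70.0  # assume 70 is the passing cut score

# A student whose score actually dropped can still be "projected" to pass,
# because the projection never looks at this year's performance at all.
print(actual_growth(this_year_score=55, last_year_score=68))       # False
print(projected_to_pass(last_year_reading=80, last_year_math=78))  # True
```

The point is simply that the second function never looks at this year's result in the subject being "projected," which is how a child with zero correct answers on the writing test can still be counted.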

It seems, however, that not even Pearson intended the measure to be used the way the state has been using it:

Cloudt continued to defend the projections, saying repeatedly that when a failing child was counted as passing it was because "hundreds and hundreds" of other children whose test scores fit the exact same pattern later passed.

But again, Hochberg was ready. He called as a witness an expert from Pearson, the national testing company that devised the Texas Projection Measure.

She explained that the formula used to "project" future success was not made by looking at the records of earlier kids with identical scoring patterns. It was based, again, on aggregate numbers that included the highest and lowest performing students as well as those in the middle.
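To show what that exchange turns on, here's a second minimal sketch, using entirely synthetic data. One function conditions on earlier students whose scores matched this student's pattern, which is roughly what Cloudt described; the other fits a single formula to everyone, highest and lowest performers included, which is roughly what the Pearson witness described. Every number here is made up.

```python
# Synthetic illustration of matched-pattern lookup vs. one pooled formula.
import numpy as np

rng = np.random.default_rng(0)
# Fake prior-year records: columns are [reading, math], plus a writing score
# we pretend was observed the following year.
prior_scores = rng.integers(20, 100, size=(5000, 2)).astype(float)
later_writing = (0.5 * prior_scores[:, 0] + 0.3 * prior_scores[:, 1]
                 + rng.normal(0, 8, 5000))

def matched_pattern_estimate(reading, math, tol=2):
    """Average the later outcomes of earlier students whose reading and
    math scores matched this student's (the "hundreds and hundreds" claim)."""
    mask = ((np.abs(prior_scores[:, 0] - reading) <= tol)
            & (np.abs(prior_scores[:, 1] - math) <= tol))
    return later_writing[mask].mean() if mask.any() else None

def pooled_regression_estimate(reading, math):
    """One formula fit to all students at once, then applied to this one."""
    X = np.column_stack([prior_scores, np.ones(len(prior_scores))])
    coef, *_ = np.linalg.lstsq(X, later_writing, rcond=None)
    return coef[0] * reading + coef[1] * math + coef[2]

print(matched_pattern_estimate(45, 50))
print(pooled_regression_estimate(45, 50))
```

With well-behaved synthetic data the two estimates land close together, but they are not the same procedure, and the pooled formula says nothing about how many real children with this exact pattern went on to pass.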

Reporter Rick Casey has another article on this coming in the Houston Chronicle on Sunday.
