Friday, January 9, 2026

What are we doing here, Worcester?


 I am of course posting this after the Worcester School Committee meeting at which this was discussed. That's partly because this isn't intended as anything other than my own thoughts, but it's also been a rather bleak week, and a hard one in which to pull together a through line to write. 

I haven't been paying a lot of attention to Worcester School Committee agendas in recent months, but the T&G headline "Worcester schools set goals for this school year. How are they doing?" caught my eye earlier this week.

Contrary to the impression left by the article, the goals being discussed are not ones set by the superintendent for the school district, but are the goals set by the School Committee for the superintendent. In taking a quick skim through them, though, I can understand why this was misunderstood, as there doesn't seem to be much here that is about the superintendent directly. While goals for a superintendent nearly always do tackle the work of the district (and particularly in one the size of Worcester!), generally at least the professional practice goal is about the work of the superintendent as an individual educator, and how they are improving their own professional practice.1

In any case, these are the goals the School Committee set--evaluators in Massachusetts set the goals for educators being evaluated--and this report is the formative assessment of the superintendent evaluation cycle. The assessment is "formative" because it's literally still being formed; this is the chance for the educator being evaluated to talk through where they're at, and for the evaluator (in this case, the School Committee) to give feedback. They can even, if warranted, shift goals at this point in the cycle. 

While I have a few thoughts on some of the other goals2 which I won't go further into here, it was the second to last slide that brought me up short:

To be clear, there is not a goal set by the School Committee for the superintendent on AI. There is also not a district goal on AI set by the School Committee--which sets those, too--under which this would fall. 
There is also, for what it's worth, no set of state regulations or state standards set by a body--here, it would be the Board of Elementary and Secondary Education--on AI. It is not required; it is not advised in any capacity by a body that has the authority to do so.3

So what are we doing here?

This hits particularly hard in light of what we saw this past week from Grok, the AI bot on Twitter, which created, as the UK's Internet Watch Foundation has documented, child sexual abuse material and, as Wired among others covered, other graphic sexual content in ways that are difficult to even grapple with. As of today, this function has been limited to those paying for Twitter.
But it isn't, of course, just there; you might remember the girl who was suspended from school in Louisiana after boys created content about her and she retaliated physically. 

But of course, and ongoingly, that is not the only damage: 
In its most recent State of the Youth report, Aura found four patterns in AI use among the young people in the study: 
  1. violence is common;
  2. kids are growing up fast (and not in a good way);
  3. kids can't unplug from digital stress;
  4. the tech is causing rifts at home.


As has been noted again and again, the enormous harm it is doing to vulnerable people should be enough to take these products off the market. Whether one cites how quickly the Tylenol scare of the 1980s took it off shelves or the years we spent taking our shoes off before boarding planes, "ensure no harm" should be the standard when it comes to our health.

It seems clear to me that this "must have AI in schools" push is based, fundamentally, on fear. We have to be ahead; we can't be left behind. But as Anna Luis Fernandez writes in her piece "Resisting AI Mania in Schools," published in Rethinking Schools:

When a teacher resists jumping on the AI-in-education bandwagon, they are not being timid or out of touch. When they plan their lessons and grade papers without AI, they are not wasting time. When they don’t teach students how to use it, they are not being irresponsible. By focusing not on their students’ ability to use generative AI but on their students’ ability to be generative and thus thrive in a world that can sustain them, they are absolutely thinking about the future.
Of course, at the same time, we continue to see how worthless the product is for so much of what it is being touted for. A few weeks ago, Rolling Stone reported on the growing number of AI-hallucinated studies being cited in real academic papers, something we saw in the Trump administration's report on children's health released back in May4. Its use seems to make doctors less skilled at spotting cancer, which is just the sort of use case often cited in its favor. Recently, the National Weather Service, being pushed to use AI to make weather predictions, had to take down a forecast for Idaho that included made-up towns.

via the Washington Post
Towns, of course, are something that the program literally just needs to copy off the map.

I could go on and on and on (and have, if you follow the blog), but it is not getting better; it's doing more and more damage. As Audrey Watters says in her newsletter today:
...it really does boggle my mind there are still those who insist that they can wrest "AI" into "doing good," as if technofascism can readily be reshaped for any sort of truly "generative" purpose, as if one hundred years of teaching machines has brought us anywhere other than, to borrow from B. F. Skinner, a world now truly spiraling "beyond freedom and dignity."

And she links the Grok news to AI in general: 

 ...the proliferation of “undressing” technology should remind us that the lack of consent is a fundamental element of "AI" – data and content taken without our permission, text and images "generated" without our permission, algorithmic decision-making without our permission, that little sparkly "AI" icon forced into our everyday software and thus everyday lives without our permission.  

 We are owed, here and elsewhere, a much more fundamental conversation about what we're doing in schools with this. It must be based, not on fear of falling behind, but on a clear-eyed view of what it's costing us.5

_______________________________________________

1And that no one thought a first-year superintendent whose professional career has been focused entirely on the finance and operations side should have a professional practice goal focused on academics seems an oversight to me.
2among them that the Committee is not going to be able to evaluate this year on school redistricting, and multi-year goals can't be set for new superintendents...
3the Department did create "Massachusetts Guidance for Artificial Intelligence in Schools," released this past August, but its lack of critique of the field is embodied by the closing of the "Welcoming and Acknowledgements" page: "Together, we are moving forward to support safe, ethical, and equitable integration of AI in education focused on enhanced educational outcomes and opportunities for all students across Massachusetts." This is not the critical look on which such work needs to be founded.
And more to the point here, this is in no way a requirement of districts.
4'though I wouldn't call that an academic paper
5One thing I haven't included here, which I frankly struggle to write about, is how disappointing it is--and that's a weak word for how much of a gut punch it is--to see organizations and people whose good judgement I've depended on abandon critical thinking and research on this. I am partway through a letter to ASBO's president, as late last month they announced yet another AI-empowered effort for school finance. It is difficult for me to think of anything more fundamentally opposed to the integrity, honesty, clear thinking, and logic required of those who manage money. We have school leaders who profess to care for children ignoring weekly evidence of how incredibly damaging this is for them. 
I haven't been this disappointed in so many since COVID. 
