Wednesday, January 14, 2026

AI risks outweigh benefits in schools

Rhetoric arguing that technology adoption in and of itself represents innovation and progress is not only false but undermines society’s ability to discern how to effectively harness AI to advance children’s education.

...though I would dispute that the last is possible.

Headline from NPR this morning: "The risks of AI in schools outweigh the benefits, report says" is one that probably landed on your screen. It's from a Brookings release, and while the full report is over 200 pages (add it to your "to be read" list), the summary (from which the above quote is taken) is certainly digestible.

I will say that I strongly disagree with the conclusion they draw--that the problem somehow is fixable--as, at least from what I have read so far, they do not seem to recognize the actual destruction generative AI is built on, even before a child lays a finger on a screen or keyboard. As the National Education Policy Center noted in their review of the report:

By positioning logic models as value-neutral, the report overlooks how such approaches ignore contextual complexity and the potential for unintended harms. Rather than offering critical guidance for assessing AI’s role in education, the report provides methodological cover for predetermined conclusions about AI’s inevitability and desirability.

There are other issues with their conclusions as well: 

By reducing time spent on numerous teaching-related tasks, AI allows teachers to focus on individualized student attention and enhance curriculum and instruction. 

What are these "teaching-related tasks" that can be outsourced, though? We don't want AI designing lessons, as designing and individualizing instruction is exactly the professional competency we hire teachers for. We don't want it grading the materials students turn in, as feedback to students is how they learn. Communicating with families, giving feedback on curricula...these are not things that can be relegated to an auto-complete function.

And also, the most recent research says that generative AI systems are not capable of taking on that work in the first place:

The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI.

“Current models are not close to being able to automate real jobs in the economy,” said Jason Hausenloy, one of the researchers on the Remote Labor Index study. They created the index to give policymakers clear-eyed information about the capabilities of AI systems, he said.

 It helps teachers create more objective and targeted types of assessments that reduce bias while more accurately measuring students’ knowledge, skills, and aptitudes.

...save of course that we have evidence that it does none of those things.

 AI can empower student learning by providing access to otherwise unavailable learning opportunities and presenting content in ways that are more engaging and accessible, particularly for students with disabilities, neurodivergent learners, and multilingual learners.

...save, again, that we have evidence that it does none of those things. And note that these 'more engaging and accessible' lessons are aimed at our most vulnerable learners, barely two sentences after it was argued that AI was going to free up the teacher to provide more of that individual attention. Which is it?
