Tuesday, September 16, 2025

Is there bad news on AI in education this week?

 THERE IS ALWAYS BAD NEWS ON AI IN EDUCATION!

  • Okay, but first some good news: students are pushing back.

  • Can teachers set up "simulated students" they can trust when deciding what practice or assessments to give their real students?
    “We were interested in finding out whether we can actually trust the models when we try to simulate any specific types of students. What we are showing is that the answer is in many cases, no,” said Ekaterina Kochmar, co-author of the study and an assistant professor of natural-language processing at the Mohamed bin Zayed University of Artificial Intelligence in the United Arab Emirates, the first university dedicated entirely to AI research.
    NO! And, in a darkly amusing twist, it's because of how they work:
    The LLMs that underlie AI tools do not think but generate the most likely next word in a given context based on massive pools of training data, which might include real test items, state standards, and transcripts of lessons. By and large, Kochmar said, the models are trained to favor correct answers.
    “In any context, for any task, [LLMs] are actually much more strongly primed to answer it correctly,” Kochmar said. “That’s why it’s very difficult to force them to answer anything incorrectly. And we’re asking them to not only answer incorrectly but fall in a particular pattern—and then it becomes even harder.”
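
If it helps to make that mechanism concrete, here is a minimal sketch (the toy prompt, vocabulary, and probabilities are invented for illustration, not taken from the study or any real model) of why a next-word predictor that concentrates probability on the correct continuation is hard to steer toward a specific error pattern:

```python
# Toy sketch only: these probabilities are made up for illustration.
import random

# Hypothetical next-token distribution for the prompt "What is 7 x 8?"
# Training pushes most of the probability mass onto the correct answer.
next_token_probs = {
    "56": 0.90,   # correct answer
    "54": 0.04,   # plausible student slip
    "48": 0.03,   # confusing 7 x 8 with 6 x 8
    "63": 0.02,   # confusing 7 x 8 with 7 x 9
    "15": 0.01,   # adding instead of multiplying
}

def greedy(probs):
    """Default behavior: pick the single most likely token (the correct answer)."""
    return max(probs, key=probs.get)

def sample(probs):
    """Sampling surfaces wrong answers occasionally, but which error appears
    is essentially random -- not the consistent misconception a simulated
    struggling student would need to exhibit."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy(next_token_probs))                        # always "56"
print([sample(next_token_probs) for _ in range(10)])   # mostly "56"; errors scattered
```

The only point of the sketch: getting a model to be wrong in a particular, pedagogically meaningful way means fighting the very distribution it was trained to produce.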
