Yes, friends, there is ALWAYS bad news in AI education!
First up, Benjamin Riley is as frustrated as I am about AI in education and this nonsense "moonshot" notion, in a post that I urge you to read from first to last.
Some schools have been using AI surveillance software, and it is not only triggering false alarms; it has resulted not only in student discipline but also in students being arrested.
And yesterday brought yet another research release telling us the same thing we find every time we ask:
Asked to generate intervention plans for struggling students, AI teacher assistants recommended more-punitive measures for hypothetical students with Black-coded names and more supportive approaches for students the platforms perceived as white, a new study shows...
Common Sense Media found that while these tools could help teachers save time and streamline routine paperwork, AI-generated content could also promote bias in lesson planning and classroom management recommendations.

Robbie Torney, senior director of AI programs at Common Sense Media, said the problems identified in the study are serious enough that ed tech companies should consider removing tools for behavior intervention plans until they can improve them. That’s significant because writing intervention plans of various sorts is a relatively common way teachers use AI.
The report from Common Sense Media can be found here.
I will echo Riley's post here, then:
Add to that all of the generative AI systems.
