It's past time to fess up, all. This emperor has no clothes. If you don't have someone like Andrew Lipsett of Woburn saying this in public:
Urging his colleagues to develop policies now to discourage the integration of such technology into classroom environments, the outgoing School Committee member, who is not seeking re-election in November, volunteered to jump-start that process by meeting with central office administrators to review how AI might be integrated into the curriculum.
“I tend to be a technology-positive person. I’m an early adopter of a lot [of] things. But AI to me is a very dangerous road [to travel down] when it comes to education and I’m concerned about [sending] a message that there is a way for students to use it safely,” said the School Committee member, who is employed as a high school history teacher in Billerica.
“What I tend to see in my classroom is that AI has been used as a short-cut, a method of cheating,” he continued. “AI is something that may be useful for people who have advanced training, who are very comfortable writing, and who have done a lot of work in this area before. [But for school use], I think we’re opening Pandora’s box and it’s very concerning for me as an educator to watch.”
...then go be that person yourself.
Why? Let's look at the most recent headlines:
"Google's 'homework help' is here but teachers say it enables cheating":
Pressing it launches Google Lens, a service that reads what’s on the page and can provide an “AI Overview” answer to questions — including during tests...
Chrome’s cheating tool exemplifies Big Tech’s continuing gold rush approach to AI: launch first, consider consequences later and let society clean up the mess.
If you've been following how AI 'works,' you perhaps won't be surprised to learn that the answers it provides aren't even accurate:
When he tested the homework help button on his own assessments, sometimes the answers it offered were good — and other times they were not. “Students really aren’t helped as much as they might think they are by using this, instead of resource guides written by specialists,” he says. (My own tests of AI-generated answers show they still have some major blind spots.)
Cheating your way to the wrong answers is a new one. And this isn't even something teachers could opt into or out of; the button just appears:
Other instructors bristled at the fact Google was forcing AI into their classrooms. “How do educators have any real choice here about intentional use of AI when it is just being injected into educational environments without warning, without testing and without consultation?” says Eamon Costello, an associate professor at Dublin City University.
And why?
He says the covid-19 pandemic forced schools to digitize everything, and now they’re being forced to live-test generative AI on students. “We are caught up in an AI in education gold rush,” he says.
This of course has actual consequences, as The 74 covers in this week's "Another AI side effect: erosion of student-teacher trust," with the subheading "AI is exacerbating a feeling that since the pandemic, the classroom dynamic has grown transactional":
When students can cheaply and easily outsource their work, he said, why value a teacher’s feedback? And when teachers, relying on sometimes unreliable AI-detection software, believe their students are taking such major shortcuts, the relationship erodes further.
And let's not put this all on the students: when teachers have AI write their lessons and assessments and do their grading, why should students feel invested in an educational process that their teachers clearly don't care enough about to invest their own time, effort, and expertise in?
When teachers presume students will use AI and lean on "detectors" to catch them, they undermine the relationship they have with students, particularly since (surprise!) such tools are riddled with bias:
In an interview, Gorichanaz said instructors’ trust in AI detectors is a big problem. “That’s the tool that we’re being told is effective, and yet it’s creating this situation of mutual distrust and suspicion, and it makes nobody like each other. It’s like, ‘This is not a good environment.’”
For Gorichanaz, the biggest problem is that AI detectors simply aren’t that reliable — for one thing, they are more likely to flag the papers of English language learners as being written by AI, he said. In one Stanford University study from 2023, they “consistently” misclassified non-native English writing samples as AI-generated, while accurately identifying the provenance of writing samples by native English speakers.
“We know that there are these kinds of biases in the AI detectors,” Gorichanaz said. That potentially puts “a seed of doubt” in the instructor’s mind, when they should simply be using other ways to guide students’ writing. “So I think it’s worse than just not using them at all.”
Both ends of the interaction are undermining trust if we're using this crap, and it's up to the adults to be the adults in the room here.
All of this around classrooms is playing out while OpenAI admits that hallucinations are not flaws but are 'mathematically inevitable,' while the use of AI ushers in a 'golden age of hacking' (a particular concern given how vulnerable U.S. schools are to cybercrime), and while generative AI poses actual harm to children.
That our Massachusetts Department of Elementary and Secondary Education continues to frame this in terms of content and "resources" really just demonstrates how poorly education is doing at managing this. People are not doing their research; they are not reading and reviewing enough; they are acting as if this is simply another technology to add to classrooms. The basic critical thinking we hope to inculcate by elementary school isn't present in this discussion.