For the past two weeks, nearly every day has brought at least one article, often more, about the tech backlash in schools. An attempt at a roundup:
- NBC News shared an extensive piece on the backlash against i-Ready specifically, the math and reading assessment program that is now being sued for stealing student data. The reach is huge:
Nearly one-third of students from kindergarten to 12th grade nationwide use i-Ready, including in nine of the 10 largest school districts.
And the impact on student learning is not nearly as clear as one might like. I really recommend reading it, especially if you're in a district that uses i-Ready.
- A New York City-specific parallel to the excellent piece by Jessica Winter in the New Yorker that I linked to last month: Intelligencer's "Help! My Kindergarten Is All In on AI. “Are they taking over? Yeah, they’re taking over.”" It speaks not only to the depersonalization of apps teaching kids to read and practice math, but also notes the "vast amount of data" that ST Math, for example, is collecting from every child. Never forget what this is actually about for the companies:
“These companies are literally mandated by their corporate charters to maximize profits in any way possible,” said Bridget Kessler, a mom of elementary-school-age children in Flatlands who also serves on her local community-education council. “So how can I trust that they have the best interests of my kids in mind?”
Earlier in the piece, by the way, is something I find particularly insidious: the use of so-called "fellowships" to infiltrate school administrations:
Three years ago, Google partnered with the AI-focused investment firm Global Silicon Valley Ventures to “advance technology” in education — through programs like Amira, which GSV Ventures has a stake in. Top education administrators have joined the Google GSV Education Innovation Fellowship, including superintendent of Manhattan high schools Gary Beidleman and chief academic officer Miatheresa Pate, who is responsible for the city’s AI guidelines. In a policy document from the fellowship, Beidleman advises educators on how to get parents and teachers to accept AI: Find “early adopters and key messengers” to convert skeptics and achieve a “cultural shift” from within.
Such programs are not approved by school committees, despite demanding a significant commitment of time, and often no one checks whether they align with the goals, policies, and values of the district.
- Government Technology reports that three-quarters of respondents to a PA Unplugged survey think students are spending too much time on screens in schools. The article looks at some specific responses in Pennsylvania.
- Speaking of making money off of all of this, 404 Media reported that OpenAI, Google, and Microsoft are all supporting the federal Literacy in Future Technologies Artificial Intelligence, or LIFT AI, Act. They aren't doing that out of concern for students' best interests.
- On AI specifically, I thought this piece in Vanity Fair, about how AI keeps appearing in TV shows and is always the bad guy, was really telling:
The Hacks team also explores the real-life messaging that the tech community has been using—that “AI is coming whether you like it or not,” a catchphrase that is heard around Hollywood on a near-daily basis. Ava (Hannah Einbinder) pushes back on that idea, criticizing this “forced inevitability.” “People like you are always saying that it’s happening whether you like it or not, but you are the ones making it happen,” she says. “You could easily stop it if people could say that they don’t want it—but you don’t give people a choice.”
Statsky says that aggressiveness from the tech community about AI’s inevitability is a red flag for her as a writer. “Sometimes, when something is being forced so hard, you can smell behind it that it’s not organic and it’s not natural,” she says. “If this was really the dream technology that they were pushing, well, then why are you needing to force it down our throats?”
- The family of one of the victims of last year's shooting at Florida State University is suing OpenAI, which the shooter used in planning the attack.
“OpenAI knew this would happen. It’s happened before and it was only a matter of time before it happened again,” Joshi said in a Monday statement. “But they chose to put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this.”
Attorneys for Joshi also reiterated that “ChatGPT inflamed and encouraged Ikner’s delusions; endorsed his view that he was a sane and rational individual; helped convince him that violent acts can be required to bring about change,” adding that the software provided what he viewed as encouragement to “carry out a massacre, down to the detail of what time would be best to encounter the most traffic on campus.”
- And don't miss John Oliver: