And this week, you don't want to miss:
- This piece in the New York Times on people¹ whose interactions with generative AI sent them down conspiratorial rabbit holes:
Generative A.I. chatbots are “giant masses of inscrutable numbers,” Mr. Yudkowsky said, and the companies making them don’t know exactly why they behave the way that they do. This potentially makes this problem a hard one to solve. “Some tiny fraction of the population is the most susceptible to being shoved around by A.I.,” Mr. Yudkowsky said, and they are the ones sending “crank emails” about the discoveries they’re making with chatbots. But, he noted, there may be other people “being driven more quietly insane in other ways.”
That's a gift link above, so do read the whole thing. Benjamin Riley relates a similar situation closer at hand in his post here.
On the ongoing theme of "this is doing the opposite of what you want to do to the people you're educating," we have this piece of research from Cornell, published last week, which finds:
Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
That's in addition to use of generative AI being linked to the erosion of critical thinking skills, memory loss, and procrastination.
___________________
¹ As the first individual quoted started by using it "to make financial spreadsheets and to get legal advice," given what we know, he was already in trouble.