All, there is ALWAYS more bad generative AI news!
- Last week, 67 companies signed a "Pledge to America's Youth: Investing in AI education" with the Trump White House. If you think "Google, IBM, MagicSchool, Meta, Microsoft, NVIDIA and Varsity Tutors" and the like are in this for the good of children and their education, I have an almost $900 billion annual budget to sell you.
These are not people you want coming up with ethics, rules of play, or a code of transparency around any of this. In fact, these are the same people and companies who wanted to ban AI regulation for ten years (luckily they lost).
Let's be super clear: these companies are in it because they all have dollar signs in their eyes.
- Benjamin Riley (are you reading him all the time yet?) has, as always, some good thinking on the attitude to adopt towards this (and so much else, frankly): "AI researchers should be their own best skeptics."
- Futurism last week had a piece on people being involuntarily committed (yes, another one) due to "ChatGPT psychosis":
Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco who specializes in psychosis, told us that he's seen similar cases in his clinical practice.
After reviewing details of these cases and conversations between people in this story and ChatGPT, he agreed that what they were going through — even those with no history of serious mental illness — indeed appeared to be a form of delusional psychosis.
"I think it is an accurate term," said Pierre. "And I would specifically emphasize the delusional part."He goes on to say:
"What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn't with a human being," Pierre said. "And yet, there's something about these things — it has this sort of mythology that they're reliable and better than talking to people. And I think that's where part of the danger is: how much faith we put into these machines."*
Chatbots "are trying to placate you," Pierre added. "The LLMs are trying to just tell you what you want to hear."
We are already seeing massive harms in so many ways. Why are we letting this anywhere near children or their education at all?
_______________________________________________________________
*I would note that this is what this piece in Forbes misses in its promotion of possible positive uses.