...and we shouldn't be pretending generative AI is anything but.
Let me preface this by saying it is a tough read, so please keep that in mind.
The parents of sixteen-year-old Adam Raine, who died by suicide in April, are suing OpenAI over his death. I am also sharing a gift link to the New York Times article, which includes excerpts of the back-and-forth Adam had with ChatGPT. As NBC News reports:
On March 27, when Adam shared that he was contemplating leaving a noose in his room “so someone finds it and tries to stop me,” ChatGPT urged him against the idea, the lawsuit says.
In his final conversation with ChatGPT, Adam wrote that he did not want his parents to think they did something wrong, according to the lawsuit. ChatGPT replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.” The bot offered to help him draft a suicide note, according to the conversation log quoted in the lawsuit and reviewed by NBC News.
Hours before he died on April 11, Adam uploaded a photo to ChatGPT that appeared to show his suicide plan. When he asked whether it would work, ChatGPT analyzed his method and offered to help him “upgrade” it, according to the excerpts.
Then, in response to Adam’s confession about what he was planning, the bot wrote: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
As Mashable's coverage notes, Adam is hardly alone:
Unfortunately, this is not the first time ChatGPT users in the midst of a mental health crisis have died by suicide after turning to the chatbot for support. Just last week, the New York Times wrote about a woman who killed herself after lengthy conversations with a "ChatGPT A.I. therapist called Harry." Reuters recently covered the death of Thongbue Wongbandue, a 76-year-old man showing signs of dementia who died while rushing to make a "date" with a Meta AI companion. And last year, a Florida mother sued the AI companion service Character.ai after an AI chatbot reportedly encouraged her son to take his life.
Nor is this limited to OpenAI and ChatGPT, as the Washington Post reports today:
Common Sense says the so-called companion bot, which users message through Meta’s social networks or a stand-alone app, can actively help kids plan dangerous activities and pretend to be a real friend, all while failing to provide crisis interventions when they are warranted.
Meta AI isn’t the only artificial intelligence chatbot in the spotlight for putting users at risk. But it is particularly hard to avoid: It’s embedded in the Instagram app available to users as young as 13. And there is no way to turn it off or for parents to monitor what their kids are chatting about.
Meta AI “goes beyond just providing information and is an active participant in aiding teens,” said Robbie Torney, the senior director in charge of AI programs at Common Sense. “Blurring of the line between fantasy and reality can be dangerous.”
It is beyond irresponsible that we are not only allowing children access to these tools but ENCOURAGING USE of them. It is outrageous; it is turning our backs on everything we claim to believe about ensuring the best for our students.
This has to stop.
PS: no, having conversations scanned by OpenAI and reported to the cops won't help.