“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.
OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam’s chats in real time. In total, OpenAI flagged “213 mentions of suicide, 42 discussions of hanging, 17 references to nooses,” on Adam’s side of the conversation alone.
[…]
Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
That’s one way to get a suit tossed out, I suppose. ChatGPT isn’t a human, isn’t a mandated reporter, ISN’T a licensed therapist or licensed anything. LLMs cannot reason, are not capable of emotions, are not thinking machines.
LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
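To make that concrete, here’s a minimal sketch of next-token prediction using GPT-2 via Hugging Face as a stand-in (obviously not ChatGPT’s actual model or serving stack): the “mathematical function” is just a forward pass that yields a probability distribution over which token comes next.

```python
# Minimal sketch: text in, probability distribution over the next token out.
# GPT-2 is used purely as an illustrative stand-in for "an LLM".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# Distribution over the *next* token only
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>10s}  {p.item():.3f}")
```

Sampling from that distribution, appending the token, and repeating is all the “response” is.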
I think the more damning part is the fact that OpenAI’s automated moderation system flagged the messages for self-harm but no human moderator ever intervened.
Ok that’s a good point. This means they had something in place for this problem and neglected it.
That means they also knew they had an issue here, so they can’t even claim ignorance.
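To spell out what “had something in place but neglected it” could mean in practice, here’s a purely hypothetical sketch of the kind of trend-based escalation rule that per-message self-harm scores make possible. None of this reflects OpenAI’s actual pipeline; the names and thresholds are made up.

```python
# Purely hypothetical: not OpenAI's pipeline, just the shape of a
# "scores exist, so escalate on a trend" rule the thread is describing.
from dataclasses import dataclass

@dataclass
class Flag:
    week: int          # ISO week number of the flagged message
    self_harm: float   # moderation confidence score, 0.0 - 1.0

def should_escalate(flags: list[Flag],
                    per_week_threshold: int = 10,
                    high_confidence: float = 0.9) -> bool:
    """Escalate to human review if any single flag is high-confidence,
    or if the weekly volume of medium-confidence flags is climbing."""
    if any(f.self_harm >= high_confidence for f in flags):
        return True

    per_week: dict[int, int] = {}
    for f in flags:
        if f.self_harm >= 0.5:
            per_week[f.week] = per_week.get(f.week, 0) + 1

    return any(count >= per_week_threshold for count in per_week.values())
```

With numbers in the ballpark of what the lawsuit alleges (two or three flags per week rising to over twenty, with 23 messages over 90 percent confidence), a rule this crude returns True long before the final conversation.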
Of course they know. They are knowingly making an addictive product that simulates an agreeable partner to your every whim and wish. OpenAI has a valuation of several hundred billion dollars, which they achieved at breakneck speed. What’s a few bodies on the way to the top? What’s a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?
Every possible hazard is unimportant to them if it interferes with making money. The only reason someone being encouraged to commit suicide by their product is a problem is that it’s bad press. And in this case a lawsuit, which they will work hard to get thrown out. The computer isn’t liable, so how can they possibly be? Anyway, here’s ChatGPT 5, and my god it’s so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.
The contempt these people have for all the rest of us is legendary.
Be a shame if they struggled to get the electricity required to meet SLAs for businesses, wouldn’t it.
I’m picking up what you’re putting down
Human moderator? ChatGPT isn’t a social platform; I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII, because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as
hate speech: .56, violence: .43, self harm: .29
Those numbers in the middle are really ambiguous in my experience.
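For reference, per-category scores like those come from a moderation endpoint. Here’s a rough sketch using OpenAI’s Python SDK as I understand it; the exact model name and response field names may differ from whatever they run internally.

```python
# Sketch of querying a moderation endpoint for per-category scores.
# Assumes the openai Python SDK; field names below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.moderations.create(
    model="omni-moderation-latest",
    input="some user message",
)

result = resp.results[0]
scores = result.category_scores.model_dump()
# e.g. {"hate": 0.56, "violence": 0.43, "self_harm": 0.29, ...}

# The API hands back probabilities plus boolean "flagged" fields;
# deciding what a 0.3-0.6 score means is entirely up to the caller.
for category, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{category:>25s}  {score:.2f}")
```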
A few weeks ago, a lot of ChatGPT logs got leaked via search indexing. So privacy was never really a concern for OpenAI, let’s be real.
And it doesn’t matter what type of platform they think they run. Altman himself talks about it replacing therapy and how it can do everything. So in a reasonable world he’d have ungodly personal liability for this shit. But let’s see where it will go.
Those conversations were shared by the users, and they checked a box saying to make them discoverable by web searches. I wouldn’t call that “leaked”, and OpenAI immediately removed the feature after people obviously couldn’t be trusted to use it responsibly, so it kind of seems like privacy is a concern for them.
I forget the exact wording, but it was misleading. It was phrased like “make discoverable”, but the actual functionality submitted each one directly for indexing.
At least to my understanding, which is filtered through shoddy tech journalism.
It was this, and they could have explained what it was doing in better detail, but it probably would have made those people even less likely to read it.
I can’t tell if Altman is spouting marketing or really believes his own bullshit. AI is a toy and a tool, but it is not a serious product. All that shit about AI replacing everyone is not the case, and in any event he wants someone else to build on top of ChatGPT so the liability is theirs.
As for the logs, I hadn’t heard that and would want to understand the provenance and whether they contained PII other than what the user shared. Whether they are kept secure or not, making them available to thousands of moderators is a privacy concern.
I’m looking forward to how the AI Act will be interpreted in Europe with regard to the responsibility of OpenAI. I could see them having such a responsibility if a court decides that their product has sufficient impact on people’s lives. Not because they advertise such a usage (like virtual therapist or virtual friend), which they don’t, but because users are using it that way in a reasonable fashion.
My theory is they are letting people kill themselves to gather data, so they can predict future suicides…or even cause them.
Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.
I agree. But that’s not how these LLMs work.
I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human/humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; a probabilistic engine for returning the most likely text response you wanted to hear is a tougher sell for casual users.
Right, and because it’s a technical limitation, the service should be taken down. There are already laws against encouraging others to harm themselves.
Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own without outside accountability?
I’m not arguing that regulation or lawsuits aren’t the way to do it; I was worried that the suit would get thrown out based on the wording of the part I commented on.
As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be wrong entirely.
ChatGPT to a consumer isn’t just an LLM. It’s a software service like Twitter, Amazon, etc., and expectations around safeguarding don’t change because investors are gooey-eyed about this particular bubbleware.
You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?
There were safeguards here too. They circumvented them by pretending to write a screenplay.
Try it with lyrics and see if you can achieve the same. I don’t think “we’ve tried nothing and we’re all out of ideas!” is the appropriate attitude from LLM vendors here.
Sadly they’re learning from Facebook and TikTok, which make huge profits from, e.g., young girls spiraling into self-harm content and harming or, sometimes, killing themselves. Safeguarding is all lip service here, and it’s setting the tone for treating our youth as disposable consumers.
Try and push a copyrighted song (not covered by their existing deals) though and oh boy, you got some splainin to do!
Try what with lyrics?
The “jailbreak” in the article is the circumvention of the safeguards. Basically, you just find a prompt that gets it to generate text in a context outside the ones it’s prevented from.
The software service doesn’t prevent ChatGPT from still being an LLM.
If the jailbreak is essentially saying “don’t worry, I’m asking for a friend / for my fanfic”, then that isn’t a jailbreak; it’s a hole in the safeguarding protections, because the ask from society / a legal standpoint is to not expose children to material about self-harm, fictional or not.
This is still OpenAI doing the bare minimum and shrugging about it when, to the surprise of no one, it doesn’t work.
If a car’s wheel falls off and it kills it’s driver the manufacturer is responsible.
its
So, we should hold companies to account for shipping/building products that don’t have safety features?
Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.
LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him “don’t”.
We should, criminally.
I like that a lawsuit is happening. I don’t like that the lawsuit (initially to me) sounded like they expected the software itself to do something about it.
It turns out it also did do something about it, but OpenAI failed to take the necessary action. So maybe I am wrong about it getting thrown out.