A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
It’s all hallucinations.
Some (many) just happen to be very close to factual.
It’s sad to see that the marketing of these tools has been so effective that few realize how they work and what they do.
It really is sad. I often hear, “I even asked ChatGPT and it said…” as if that means their response is valid. I’ve heard people say it who I thought would know better, too.
The number of times I’ve heard that from people expecting it to win them arguments is incredibly discouraging.
😎👉👉 zoop!
Seriously, you have no idea. I’ve spent some time delving into the current models, human psychology, neurology, evolution, and how people engage with each other or with other entities. The problem is already worse than we realize, and it’s going to get so, so much worse, because our species has major vulnerabilities in our entire conscious experience. These things are going to reshape the way people engage with reality itself at some point, and we should all be a lot more concerned. And I’m an old man yelling on the street corner with a cardboard sign, huh.
It doesn’t matter how it works. Is the output acceptable?
Sounds like no, and it’s the company’s problem to fix it.
OK, hear me out: the output is all made up. In that context, everything is acceptable, as it’s just a reflection of the whole of the inputs.
Again, I think this stems from a misunderstanding of these systems. They’re not like a search engine (though, again, the companies would like you to believe that).
We can find the output offensive, off-putting, gross, etc., but there is no real right and wrong with LLMs the way they are now. There is only a statistical probability that a) we’ll understand the output and b) it approximates some currently held truth.
Put another way: LLMs convincingly imitate language, and therefore also convincingly imitate facts. But it’s all facsimile.
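To make that concrete, here’s a minimal toy sketch in Python. It is not how any real model is implemented, and the vocabulary and probabilities are invented, but it illustrates the point: generation is just weighted sampling over plausible continuations, and nothing in the loop checks whether the result is true.

```python
import random

# Toy sketch (not a real model): an LLM produces text by repeatedly sampling
# the next token from a probability distribution learned from training data.
# Fluency is rewarded; truth is never checked.

# Hypothetical, made-up next-token distribution for a single context.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # common in training text, factually wrong
        "Canberra": 0.40,  # correct, but just another weighted option
        "Melbourne": 0.05,
    },
}

def sample_next(context: str) -> str:
    """Pick the next token by weight; there is no fact-checking step."""
    dist = NEXT_TOKEN_PROBS.get(context, {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("The capital of Australia is"))  # fluent either way
```

More often than not, that toy model confidently answers “Sydney,” which is exactly the kind of plausible-but-false output people mistake for a looked-up fact.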
Yes, the problem lies in companies marketing it as more than that, hence the company being sued right now.