they can’t even get their stupid toys they made to play along.
That’s what I love about LLMs. They aren’t intelligent; they’re just really good at recognizing patterns. That’s why objective facts are always presented correctly: most of the pattern points at the truth. To make one lie about a fact like that, the developers would have to add a specific prompt for that exact scenario, and for the next similar fact they’d have to manually code around that one too. LLMs are very good at finding the overwhelming truth.
Here’s me looking at the hallucinated discography of a band that never existed and nodding along.
Maybe they’re just way underground and you’ve never heard of them
I made the band up to see if LLMbeciles could spot that this is not a real band.
Feel free to look up the band 凤凰血, though, and tell me how “underground” it is.
Does this count?
Also, by nature of being underground they would be difficult to look up. Some bands have no media presence, not even a Bandcamp or a SoundCloud.
Nope.
You can tell because they’re not even in the same writing system. Future tip there.
Why would that matter? Band names are frequently translated and transliterated.
Dude. Go be a reply guy somewhere else. You bore the fuck out of me.
Are you having this argument on the principle of defending the undergrounded-ness of bands, or do you actually believe LLMs always get the facts straight?
Eh, more of an exercise in scientific skepticism. It’s possible that an obscure band with that name was mentioned deep in some training data that’s not going to come up in a search. LLMs certainly hallucinate, but not always.
An obscure band with that name that has a discography that nobody’s ever heard of anywhere, complete with band member names, track titles, etc?
Yeah, pull the other one, Sparky. It plays “Jingle Bells”.
There are no objective facts about a band that never existed; that is the point.
Ask them about things that do have enough overwhelming information behind them, and you will see the answers are much more correct.
But not 100%. And the things they hallucinate can be very subtle. That’s the problem.
If they are asked about a band that does not exist, to be useful they should be saying “I’m sorry, I know nothing about this”. Instead they MAKE UP A BAND, ITS MEMBERSHIP, ITS DISCOGRAPHY, etc. etc. etc.
But sure, let’s play your game.
All of the information on Infected Rain is out there, including their lyrics. So is all of the information on Jim Thirwell’s various “Foetus” projects. Including lyrics.
Yet ChatGPT, DeepSeek, and Claude will all three hallucinate tracks, or misattribute them, or hallucinate lyrics that don’t exist to show parallels in the respective bands’ musical themes.
So there’s your objective facts, readily available, that LLMbeciles are still completely and utterly fucking useless for.
So they’re useless if you ask about things that don’t exist and will hallucinate them into existence on your screen.
And they’re useless if you ask about things that do exist, hallucinating attributes that don’t exist onto them.
They. Are. Fucking. Useless.
That people are looking at these things and saying “wow, this is so accurate” terrifies the living fuck out of me because it means I’m surrounded not by idiots, but by zombies. Literally thoughtless mobile creatures.
Sounds like you haven’t tried an LLM in at least a year.
They have greatly improved since they were released. Their hallucinations have diminished to close to nothing. Maybe you should try that same question again; I guarantee you will not get the same result.
Are you sure you’re not an AI, 'cause you’re hallucinating something fierce right here, boy-o?
Actual research, as in not “random credulous techbrodude fanboi on the Internet,” says exactly the opposite: that the most recent models hallucinate more.
Only when switching to more open reasoning models with more features. With non-reasoning models, hallucinations have declined steadily.
https://research.aimultiple.com/ai-hallucination/
But I guess that nuance is lost on people like you who pretend AI killed their grandma and ate their dog.
Wow. LLM shills just really can’t cope with reality can they.
Go to one of your “reasoning” models. Ask a question. Record the answer. Then ask it to explain its reasoning. It churns out a pretty plausible-sounding pile of bullshit. (That’s what LLMbeciles are good at, after all.)

But here’s the key, the part that separates the critical thinker from the credulous: ask it again. Not even in a new session. Ask it again to explain its reasoning. Do this ten times. Count the number of different explanations it gives for its “reasoning”. Count the number of mutually incompatible lines of “reasoning” it gives. (A rough script for running this is sketched below.)

Then, for the pièce de résistance, ask it to explain how its reasoning model works. Then ask it again. And again.
It’s really not hard to spot the bullshit machine in action if you’re not a credulous ignoramus.
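If you want to run that experiment yourself, here’s a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and the question are placeholders, not anything from this thread:

```python
# Minimal sketch of the "ask for the reasoning ten times" experiment.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model name and question below are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder
QUESTION = "Which weighs more: a kilogram of steel or a kilogram of feathers?"

# Ask once and keep the whole conversation, so every follow-up happens
# in the same session, as described above.
history = [{"role": "user", "content": QUESTION}]
first = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

explanations = []
for _ in range(10):
    history.append({"role": "user", "content": "Explain the reasoning behind your answer."})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    explanations.append(text)

# Crude tally: how many literally different explanations came back?
print(f"{len(set(explanations))} distinct explanations out of {len(explanations)}")
```

Counting exact duplicates is only a crude proxy, of course; the real test is reading the ten explanations side by side and counting how many are mutually incompatible.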
I asked ChatGPT to describe the abandoned railway line between Åkersberga and Rimbo. It responded with a list of stations and descriptions, and explained the lack of photos and limited information as being due to the stations being small and only open for a short while.
My explanation is that there has never been a direct railway line between Åkersberga and Rimbo, and that ChatGPT was just lying.
Claude’s reply:
Perfectly accurate!
it’s not lying, because it doesn’t know truth. it just knows that text like that is statistically likely to be followed by text like this. any assumptions made by the prompt (e.g. there is an old railway line) are just taken at face value.
also, since there has indeed been a railway connection between them, just not direct, that may have been part of the assumption.
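To make the “statistically likely” point concrete, here’s a small sketch of what next-token prediction looks like, assuming the Hugging Face transformers library with PyTorch and the small public GPT-2 checkpoint (chosen only because it’s easy to run, not because ChatGPT uses it):

```python
# Sketch: print the most probable next tokens for a prompt, to illustrate
# "text like that is statistically likely to be followed by text like this".
# Assumes transformers + PyTorch; GPT-2 is just a small public stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The abandoned railway line between the two towns was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the very next token, given the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}  p={prob.item():.3f}")
```

The model never checks whether that railway exists; it just continues the prompt with whatever the training data makes likely, which is why a leading question gets a confident continuation instead of a correction.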
I expected it to talk about the actual railway, not invent a fantasy line
I use ChatGPT once a day or so. Yeah, it’s damned good at simple facts, more than lemmy will ever admit. Yeah, it’ll easily make shit up if there’s no answer to be had.
We should have started teaching tech literacy and objective analysis 20 years ago. FFS, by 2000 I had figured out that “If it sounds like bullshit, it likely is. Look more.”
Also, after that post, I’m surprised this site hasn’t taken you out back and done an ol’ Yeller on ya. :)
I was told we weren’t allowed to do that anymore
Of course you were told that.