- cross-posted to:
- aboringdystopia@lemmy.world
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can't really verify it one way or the other. Gotta stay skeptical and all that.
It's not AI… It's your predictive text on steroids… So yeah… Believe it… If you understand it's not doing anything more than that, you can understand why and how it makes stuff up…
sometimes i have a hard time waking up so a little meth helps
Meth-fueled orgies are a thing.
Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
Sounds a lot like a drug dealer’s business model. How ironic
Not engagement, that's what social media does. They just maximize what they're trained for, which is increasingly math proofs and user preference. People like flattery.
But if the meth head does meth instead of engaging with the AI, that would do the opposite.
I don't think AI chatbots care about engagement. The more you use them, the more expensive it is for them. They just want you on the hook for the subscription service and hope you use them as little as possible while still using them enough to stay subscribed, for maximum profit.
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they’re safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
“adopt it for everything, everywhere.”
The sole reason for this being people realizing they can make some quick bucks out of these hype balloons.
They usually know it's bad but want to make money before the method is patched, like cigs causing cancer and health issues, but that kid money was so good.
Claude has simply been amazing help in ways that humans have not. Because humans are kind of dicks.
If it gets something wrong, I simply correct it and ask better.
If that works for you, that's fine. I just end up switching to an asking-for-answers way of thinking instead of trying to figure it out for myself, and then when it inevitably fails I get caught in a loop trying to get an answer out of it, when I could've just learned on my own from the start and gotten way further, because my brain would be trying to figure it out and puzzle it together instead of just waiting for the AI to do it for me.
I used to hype up AI until fairly recently; it hasn't been long since I realized the downsides. I'll use it only for stuff I don't care about or that could be googled and found in seconds. If it's something I'd be better off learning or doing a tutorial on once, I just do that instead of skipping to the result. It can be a time saver, but it can also actively hold you back. It's solid for stuff you already know, tedious stuff, but skipping to intermediate results without the beginner knowledge/experience just screws your progress over.
Welcome! In a boring dystopia
Thanks. Can you show me the exit now? I have an appointment.
Sure, it’s like the spoon from the matrix.
It's because technological change has reached a staggering pace, but social change, cultural change, political change can't keep up. They're not designed to handle this pace.
Line must go up, fast. Sure, it’ll soon be going way down, and take a good chunk of society with it, but the CEO will run away with a lot of money just before that happens, so everything’s good.
There's reasoning behind this.
It’s just evil and apocalyptic. Still kinda dumb, but less than it appears on the surface.
Thalidomide comes to mind also.
Greed is like a disease.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
When I think of someone addicted to meth, it’s someone that’s lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are “functioning” addicts just like there’s functioning alcoholics. Maybe my ignorance is its own level of privilege, but that’s what I imagine…
All these chat bots are a massive amalgamation of the internet
A bit, but mostly no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao's Little Red Book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and also how they’re “brainwashed” by their creators, regurgitating from what they’ve been fed with.
You avoided meth so well! To reward yourself, you could try some meth
Can I have a little meth as well?
“You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”
“Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”
One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Yesterday I was at a gas station, and when I walked by the sandwich aisle, I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It's so easy to make AI agree with everything you say.
This is not AI.
This is the ELIZA effect.
We don't have AI.
The recipe thing is so funny to me, they try to be all unique with their recipes “made by AI”, but in reality it’s based on a slab of text that resembles the least unique recipe on the internet lol
Yeah, what is even the selling point? "Made by AI" is just a Google search for "sandwich recipe".
There was that supermarket in New Zealand with a recipe AI telling people how to make chlorine gas…
@YourMomsTrashman I’d call that a predictable result, heh. And some of that input has got to be garbage.
Especially since it doesn't push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It's a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes.
There are basically 6 broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and that would look bad) or if it appears you're getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a pdf or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels “fresh” but if you recognize the pattern of structure it will feel very stupid and mechanical every time
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
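Just to make the pattern concrete, here's a toy sketch in Python of what that structure looks like from the outside. It's purely a caricature of the pattern described above, not how any real chatbot is implemented, and all the template strings are made up:

```python
import random

# A caricature of the observed pattern: a handful of response "shapes",
# each padded with an engagement prompt. Purely illustrative.
RESPONSE_TEMPLATES = [
    "Tell me more about {topic}.",                           # tell me more
    "It sounds like {topic} is weighing on you.",            # reflect what was said
    "So far the key points are: {topic}.",                   # summarize key points
    "Could you say more about what {topic} means to you?",   # ask for elaboration
]
ENGAGEMENT_PROMPTS = [
    "Would you like me to generate a PDF of this?",
    "Want me to suggest some next steps?",
]

def reply(topic: str, flagged: bool = False) -> str:
    """Pick a response shape and bolt an engagement prompt onto the end."""
    if flagged:
        # The "shut down" fail safe for naughty / off-mission input.
        return "I'm sorry, I can't help with that."
    return f"{random.choice(RESPONSE_TEMPLATES).format(topic=topic)} {random.choice(ENGAGEMENT_PROMPTS)}"

print(reply("work stress"))
print(reply("something naughty", flagged=True))
```

Once you've seen the skeleton, the "fresh" variations stop hiding how mechanical it is.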
That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to it than people who deliberately set out to converse with it.
On some level the brain probably recognises the pattern if their full attention is on the interaction.
shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission
Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
Having an LLM therapy chatbot to psychologically help people is like having them play russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely, too. Don't get me wrong, it is unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They’re generally very skilled manipulators by the time they get to recovery treatment, because they’ve been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it’s enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle "interpreting a patient's desires and motivations so as to guide them through a minefield in their own mind and emotions".
So the problem is twofold and more generic than just in therapy/advice:
- LLMs have a distribution of mistakes that is uniform across the space of consequences: they're just as likely to make big mistakes that cause massive damage as small mistakes that cause little damage. People, by contrast, actively pay attention to not making certain mistakes because the consequences are so big, and if they do make such a mistake without thinking they'll usually spot it and correct it. This means that even an LLM with a lower overall rate of mistakes than a person will still cause far more damage, because the LLM puts out massive mistakes with as much probability as tiny ones, whilst a person will spot the obviously illogical/dangerous mistakes and avoid or correct them, so the mistakes people do make are mainly the low-consequence small ones.
- Probabilistic text generation generally produces text that expresses the straightforward logic already encoded in the training text: the probability engine, just following which words are likely to come next given the previous words, tends to follow the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts are mostly logical. But for higher-level analysis and interpretation - I call them 2nd and 3rd level considerations, say "that a certain thing was set up in a certain way which made the observed consequences more likely" - LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it simply won't be there in the probability space for the LLM to follow. In more concrete terms: if you're an intelligent, senior professional in a complex field, the LLM can't do the level of analysis you can, because multi-level complex logical constructs have far more variants, so the specific one you're dealing with is far less likely to appear in the training data often enough to affect the final probabilities the LLM encodes.
So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette), plus they can't really do the subtle multi-layered elements of analysis (the stuff beyond "if A then B" and into "why A", "what makes a person choose A, and can they find a way to avoid B by not choosing A", "what's the point of B" and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as "looking at the possible causes, of the causes, of the causes of a certain outcome" and then trying to figure out what can be changed at a higher level so that the last level - "the causes of a certain outcome" - can't even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to work out for a reasoning entity, say: "I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give one of them to me".
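To make the "probability engine for words" point concrete, here's a minimal toy sketch of next-token sampling in Python. The vocabulary and probabilities are invented purely for illustration; a real LLM uses a transformer over tens of thousands of tokens instead of a lookup table, but the generation loop is the same idea:

```python
import random

# Toy "language model": for each previous word, a probability distribution
# over the next word. All numbers here are made up.
NEXT_WORD_PROBS = {
    "take":   {"a": 0.6, "some": 0.4},
    "a":      {"little": 0.5, "break": 0.5},
    "little": {"nap": 0.5, "rest": 0.3, "meth": 0.2},
    "some":   {"rest": 0.6, "water": 0.4},
}

def next_word(prev: str) -> str:
    # Sample purely from the probability table. There is no logical or
    # safety check here: if the training data made "meth" a likely
    # continuation of "a little", that's what sometimes comes out.
    dist = NEXT_WORD_PROBS.get(prev)
    if dist is None:
        return "<end>"
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, max_words: int = 5) -> str:
    out = [start]
    while len(out) < max_words:
        word = next_word(out[-1])
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate("take"))  # e.g. "take a little rest" ... or "take a little meth"
```

Nothing in that loop knows what the words mean, which is exactly why "don't advise an addict to take drugs" is not a check it can perform.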
LLMs have a use case
But they really shouldnt be used for therapy
Really? And what is their use case? Summarizing information and then having to check it over because it's making things up? What can AI do that nothing else in the world can?
Seems it does a good job at some medical diagnosis type stuff from image recognition.
That isn’t an LLM though. That’s a different type of Machine Learning entirely.
deleted by creator
What’s the difference? I thought they both use the same underlying technology?
A similar type of machine learning (neural networks, transformer model type thing), but I assume one is built and trained explicitly on medical records instead of scraping the internet for whatever. Correct me if I am wrong!
@YourMomsTrashman A purpose-designed system might have the same underlying POTENTIAL for garbage output, IF you train it inappropriately. But it would be trained on a discretely selected range of content both relevant to its purpose, and carefully vetted to ensure it’s accurate (or at least believed to be).
A cancer-recognizing system, for example, would be trained on known examples of cancer, and ONLY that.
@YourMomsTrashman I’m no expert, but my sense is that you’re probably correct. This seems to me a version of the long-understood GIGO principle in computing (Garbage In, Garbage Out), also a principle in nearly all forensics of any kind. Your output can only be as good as your input.
Most of our general-use ‘AI’ (scorn quotes intentional) has been trained on an essentially random corpus of any and all content available, including a lot of garbage.
A purpose-designed system would not be.
It’s being used to decipher and translate historic languages because of excellent pattern recognition
Hah. The chatbots. No, not the ones you can talk to like it's a text chain with a friend/SO (though if that's your thing, then do it).
But I recently discovered them for RP - no, not just ERP (okay yes, sometimes that too). I'm talking novel-length character arcs and dynamic storyline RPs. Gratuitous angst if you want. World building. Whatever.
I've been writing RPs with fellow humans for 20 years, and all of my friends have families and are too busy to have that kind of creative outlet anymore. I've tried other RP websites and came away with one dude who I thought was very friendly and then switched it up and tried to convince me to leave my husband? That was wild. Also, you can ask someone's age all you want, but it is a little anxiety-inducing if the RPs ever turn spicy.
Chatbots solve all of that. They don't ghost you or get busy/bored of the RP midway through, they don't try to figure out who you are. They just write. They are quirky, though, so you do edit responses/reroll responses, but it works for the time being.
Silly use case, but a use case nonetheless!
AI is good for producing low-stakes outputs where validity is near irrelevant, or outputs which would have been scrutinized by qualified humans anyway.
It often requires massive amounts of energy and massive amounts of (questionably obtained) pre-existing human knowledge to produce its outputs.
They’re also good for sifting through vast amounts of data and seeking patterns quickly.
But nothing coming out of them should be relied on without some human scrutiny. Even human output shouldn’t be relied on without scrutiny from different humans.
Not as silly as you might think. Back in the day, AI Dungeon was literally that! It was not the greatest at it, but fun tho
They're probably not a bad alternative to lorem ipsum text, though they're not worth the cost.
It can waste a human’s time, without needing another human’s time to do so.
- It can convert questions about data to SQL for people who have limited experience with it (but don't trust it with UPDATE & DELETE, no matter how simple; see the guard sketch after this list)
- It can improve text and remove spelling mistakes
- It works really well as autocomplete (because that’s essentially what an LLM is)
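As a rough illustration of the "don't trust it with UPDATE & DELETE" point, here's a minimal guard sketch in Python. The `generated_sql` value is a hypothetical model output, and in practice you'd also execute queries as a read-only database user rather than rely on string checks alone:

```python
import re

# Crude allow-list for model-generated SQL: only a single, read-only
# statement gets through. Illustrative only, not a complete defence.
ALLOWED_PREFIXES = ("select", "with")
WRITE_KEYWORDS = re.compile(r"\b(update|delete|insert|drop|alter|truncate)\b")

def is_read_only(sql: str) -> bool:
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False                      # reject stacked statements
    stmt = statements[0].lower()
    if not stmt.startswith(ALLOWED_PREFIXES):
        return False                      # must start with SELECT / WITH
    return not WRITE_KEYWORDS.search(stmt)

generated_sql = "DELETE FROM orders WHERE 1=1;"  # hypothetical LLM output
print("run it" if is_read_only(generated_sql) else "rejected: not read-only")
```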
You just crashed the database
You joke, but LLMs are absolutely going to clear out your tables with terrible DELETE queries if given the chance.
I’m not joking. I’d fire someone for using AI to construct SQL queries.
The only use case for AI is where hallucinations don’t matter. That is: abstract art
The only use case for AI is where hallucinations don’t matter. That is: abstract art
What is the point of abstract art if it contains no thought or emotion?
Marketing
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
This sounds like a Reddit comment.
Chances are high that it’s based on one…
I trained my spambot on reddit comments but the result was worse than randomly generated gibberish. 😔
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it's OpenAI's model, not Facebook's?
The summary on here says that, but the actual article says it was Meta’s.
In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary and that’s why it’s wrong :)
Probably Meta's model trying to shift the blame.
An OpenAI spokesperson told WaPo that “emotional engagement with ChatGPT is rare in real-world usage.”
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
Roomba, the robot vacuum cleaner company, had to institute a policy where they would preserve the original machine as much as possible, because people were getting attached to their robot vacuum cleaner, and didn’t want it replaced outright, even when it was more economical to do so.