You should probably mention that this is an article from 7 months ago.
Has anything changed?
No, they already stole everything, so there’s nothing left they can use to train and improve further.
Yes, it kept improving
[Citation needed]
If anything the LLMs have gotten less useful and started hallucinating even more obviously now.
7 months ago: https://web.archive.org/web/20241210232635/https://openlm.ai/chatbot-arena/ Now: https://web.archive.org/web/20250602092229/https://openlm.ai/chatbot-arena/
You can see that o1-mini, a silver (almost gold) model, is now a middle-of-the-road copper model.
Note that Chatbot Arena calculates its score relatively - they’ll show two outputs (without the model names), and people select the output they prefer. The preferences are ordered. Not sure what accounts for gold/silver/copper.
lol right
Yes. 7 months ago there weren’t any reasoning models. The video models were far worse. Coding was nothing compared to capabilities they have now.
AI has come far, fast, since the time this article was written.
Testing shows that current models hallucinate more than previous ones. OpenAI rebadged ChatGPT 5 to 4.5 because the gains were so meagre that they couldn’t get away with pretending it was a serious leap forward. “Reasoning” sucks; the model just leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion. In many cases the steps and the conclusion don’t match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every “hope for the future” has fizzled utterly.
Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you’re getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a “hyperscaling” technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.
The current state of AI is not cost effective. Microsoft (just to pick on one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration to be accelerating. We’re nowhere near close to that.
The crash is coming, not because LLMs cannot ever be improved, but because it’s becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.
DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of different specialized models that can be switched between for different tasks (at least, that’s how I understand it)
So I’m not going to assume LLMs will hit a wall, but it’s going to require something else paradigm shifting that we just aren’t seeing out of the current crop of developers.
That was pretty much always the only potential path forward for LLM type AIs. It’s an extension of the same machine learning technology we’ve been building up since the 50s.
Everyone trying to approximate an AGI with it has been wasting their time and money.
Yes, but the basic problem doesn’t change: you’re spending billions to make millions. And DeepSeek’s approach only works because they’re able to essentially distill the output of less efficient models like Llama and GPT. So they haven’t actually solved the underlying technical issues, they’ve just found a way to break into the industry as a smaller player.
At the end of the day, the problem is not that you can’t ever make something useful with transformer models; it’s that you cannot make that useful thing in a way that is cost effective. That’s especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that’s worth Jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.
AI has a large initial cost, but older models will continue to exist, and the open-source models will continue to take potential profit from the corps.
Amazon did not turn a profit for 14 years. That’s not a sign of a crash.
AI is progressing, and different routes are being tried. Some might not work as well as others. We are on a very fast train. I think the crash is unlikely. The prize is too valuable, and it’s strategically impossible to leave it to someone else.
Amazon isn’t a good comparison. People need to buy things. Having a better way to do that was and is worth billions.
There is no revolutionary product that people need on the horizon for AI. The products released using it are mostly just fun toys, because it can’t be trusted with anything serious. There’s no indication this will change in the near to distant future.
People don’t need to buy anything over Amazon. That’s not a need.
There is no revolutionary product on the horizon!?! I’m not sure how to respond to that.
You think it’s all a scam and everyone is in on it?
Assuming it cost Microsoft $0 to provide their AI services (this is up there with “assuming all of physics stops working”), and every dollar they make from Copilot was pure profit, it would take Microsoft 384 years to recoup one year of investment in AI.
And that’s without even getting into the fact that, in reality, these services are so expensive to run that every time a customer uses them it’s a net loss to the provider.
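The “384 years” figure above is just the ratio of the thread’s rough numbers (tens of billions invested vs. tens of millions in Copilot revenue). A quick sketch, using assumed figures that reproduce the comment’s arithmetic, not audited financials:

```python
# Back-of-envelope check of the "384 years" claim.
# Both dollar figures are rough assumptions from this thread, not real accounting:
annual_investment = 10_000_000_000  # assumed yearly AI spend (~tens of billions)
annual_revenue = 26_000_000         # assumed yearly Copilot revenue (~tens of millions)

# Best case: zero running costs, so every revenue dollar counts as profit.
years_to_recoup = annual_investment / annual_revenue
print(int(years_to_recoup))  # → 384
```

Plug in different estimates and the order of magnitude doesn’t change much, which is the point the comment is making.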
When Amazon started out, no one had heard of them. Everyone has heard of Microsoft. Everyone already uses Microsoft’s products. Everyone has heard about AI. It’s the only thing in tech that anyone is talking about. It’s hard to see how they could be doing more to market this. Same story with OpenAI, Facebook, Google, basically every player in this space.
Even if they can solve the efficiency problems to the point where they can actually make a profit off of these things, there just isn’t enough interest. AI does plenty of things that are useful, but nothing that’s truly vital, and it needs to be vital to have any hope of making back the money that’s gone into it.
At present, there simply is not a path to profitability that doesn’t rely on unicorn farts and pixie dust.
The companies developing AI don’t need to make a profit, just the same as Amazon didn’t. They are in the development phase. Profit is not a big concern.
There aren’t any reasoning models now. LLMs cannot reason (and the whole “reasoning” BS has just been busted by Apple), just like they can’t orgasm, no matter what daddy Sam tells you.
I think you should try to be even more profane, good rhetorical strategy, well done.
Remember kids: when your brain fails to construct an argument - just tone-police!
Says the guy whose argument is an insult. 🤌
Did it?
LOL
Nope, AI has already kinda peaked at what it can do currently.
Three years ago Sam Altman said the current models had hit a wall, and the media blocked it out.
Don’t get my hopes up. I want them to lose as much of their dumb tech bro money as possible.
The problem is it will hurt everyone when they fail
Anyone relying on this shit deserves it. Let these venture capitalists throwing money at Ai all burn.
What I am saying is the investments are at a scale that could cause a recession when these companies fail. Meaning it’s likely to affect everyone in the economy we all work in. You won’t need to be working on AI to feel the impact.
I agree with you about feeling no pity for the tech bros. However, a big appeal of AI for them is the elimination of employees. And that’s going to hurt more regular folks who did not sign up for AI on a much more noticeable level. I don’t think any nation is set up to handle the level of unemployment that’s on the horizon. So ignoring the environmental impacts of LLM/AI servers: let’s get national food, shelter, and healthcare systems in place, and then I’d be all for letting the venture capitalists shove their dicks in blenders.
yay!
Yes, of course they are at the limit, and because they poisoned the internet with generative bullshit, they can’t scrape it and expect improvement. But they are still scraping it, so they’re poisoning themselves.
The end of the article has classic snake oil trash. The idea that newer AI could be trained to think similar to how humans think. Yes, great, you know scientists have been working on that for decades. Good luck succeeding where nobody else did. There’s a reason that so-called weak AI or so-called expert systems are the ones that we all remember as having lasted for decades.
The only people ever obsessed with AI were corporate heads looking to reduce headcount in their companies and to suck up more VC money.
Right. And now AI has failed to deliver the promised miracles (as expected) for three years. So now it’s time to pick a new hype train to introduce the venture capitalists to.
All aboard!
I just went by a convention center; they are still hyping it up with tech conventions every few days. I’m in the West, so they concentrate all the tech hyping in the West.
It’s worse than that… They are broken. Like, they are all fucking broken.
Shh. Let it happen. Let the poison take hold.
I don’t think it’s just the poison, but an inherent limitation on the technology. An LLM is never going to be able to have critical thinking skills.
We’re going to need a new BS tech meme for arsehole investors to speculate in. What’s next? I’m guessing something medical. Personalized health care, perhaps.
I think you’re right.
We’re due for something DNA based, since they can all grab a cheap copy of the 23andMe data set, now.
I got into AI in 2023 for a few months just to see if I could make sense of it all.
The whole thing as an industry is almost entirely smoke and mirrors meant to confuse, obscuring theft and fraud.
It does have some neat applications and opportunities (generating templates, storyboarding), but it warrants a small R&D team of enthusiasts, not the collective investment and resources of entire nations.
What you’re forgetting is: it makes all video blackmail tapes useless… or it’s getting closer to that…
At any rate, that’s the only way I can rationalize the amount of money going into it…
BTW, did you know that when the FBI raided Epstein’s island, they forgot to get a warrant for his safe, which was full of hard drives and videos… and after they got the warrant, the safe had been emptied…
…from a secured FBI crime scene on an island…
I guess nobody talks about it because there’s not much else to the story…
Well, except: how was the safe even excluded? If they’re searching the whole property, wouldn’t the safe be included?
What comes after that is the next google/amazon.