• @halcyoncmdr@lemmy.world
          38 points · 13 days ago

          [Citation needed]

          If anything, the LLMs have gotten less useful and have started hallucinating even more obviously now.

        • NoiseColor
          13 points · 13 days ago

          Yes. 7 months ago there weren’t any reasoning models. The video models were far worse. Coding was nothing compared to the capabilities models have now.

          AI has come far, fast, since the time this article was written.

          • @Voroxpete@sh.itjust.works
            21 points · 12 days ago

            Testing shows that current models hallucinate more than previous ones. OpenAI rebadged ChatGPT 5 as 4.5 because the gains were so meagre that they couldn’t get away with pretending it was a serious leap forward. “Reasoning” sucks: the model leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion; in many cases the steps and the conclusion don’t even match. And because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every “hope for the future” has fizzled utterly.
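
            (To make the cost point concrete, here’s a toy cost model of multi-pass “reasoning”; the token counts are made up for illustration, not vendor figures:)

            ```python
            # Sampling several long "reasoning" chains costs many times the
            # tokens of a single direct answer (hypothetical numbers).
            def reasoning_cost_multiplier(chains: int, tokens_per_chain: int,
                                          direct_tokens: int) -> float:
                return (chains * tokens_per_chain) / direct_tokens

            # e.g. 8 chains of 2,000 tokens vs. a 200-token direct answer:
            print(reasoning_cost_multiplier(8, 2000, 200))  # -> 80.0x the tokens
            ```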

            Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you’re getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a “hyperscaling” technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.
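
            (And to put rough numbers on “incremental improvements at exponential costs”, here’s a sketch using the power-law shape reported in scaling-law papers; the exponent is picked purely for illustration, not a measured value:)

            ```python
            # If loss scales as compute**(-alpha), a small alpha means the
            # compute bill explodes for modest loss improvements.
            def compute_multiplier(loss_ratio: float, alpha: float = 0.05) -> float:
                # Solve (C2/C1)**(-alpha) == loss_ratio for C2/C1.
                return loss_ratio ** (-1.0 / alpha)

            print(compute_multiplier(0.9))  # ~10% better loss -> ~8x compute
            print(compute_multiplier(0.5))  # halving loss -> ~1,000,000x compute
            ```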

            The current state of AI is not cost effective. Microsoft (just to pick on one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration itself to be accelerating. We’re nowhere near that.

            The crash is coming, not because LLMs cannot ever be improved, but because it’s becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.

            • queermunist she/her
              7 points · edited · 12 days ago

              DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of specialized models that can be switched between for different tasks (at least, that’s how I understand it).

              So I’m not going to assume LLMs will hit a wall, but continued progress is going to require another paradigm shift, and we just aren’t seeing that out of the current crop of developers.
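
              (A request-level sketch of that “switch between specialists” idea; the model names and keyword router here are invented for illustration, and DeepSeek-style mixture-of-experts actually routes per token inside one network rather than per request:)

              ```python
              # Toy dispatcher: pick a specialized model per task instead of
              # asking one giant generalist to do everything.
              from typing import Callable

              SPECIALISTS: dict[str, Callable[[str], str]] = {
                  "code": lambda p: f"[code model handles: {p}]",
                  "math": lambda p: f"[math model handles: {p}]",
                  "chat": lambda p: f"[general model handles: {p}]",
              }

              def route(prompt: str) -> str:
                  lower = prompt.lower()
                  if any(w in lower for w in ("bug", "compile", "def ")):
                      return SPECIALISTS["code"](prompt)
                  if any(w in lower for w in ("solve", "integral", "prove")):
                      return SPECIALISTS["math"](prompt)
                  return SPECIALISTS["chat"](prompt)

              print(route("solve x**2 = 4"))  # dispatched to the math specialist
              ```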

              • skulblaka
                8 points · 12 days ago

                That was pretty much always the only potential path forward for LLM type AIs. It’s an extension of the same machine learning technology we’ve been building up since the 50s.

                Everyone trying to approximate an AGI with it has been wasting their time and money.

              • @Voroxpete@sh.itjust.works
                7 points · 12 days ago

                Yes, but the basic problem doesn’t change: you’re spending billions to make millions. And DeepSeek’s approach only works because they’re able to essentially distill the output of less efficient models like Llama and GPT. So they haven’t actually solved the underlying technical issues; they’ve just found a way to break into the industry as a smaller player.
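
                (For context, “distilling” here means the standard knowledge-distillation technique; this is a minimal sketch of the loss, assuming PyTorch, not DeepSeek’s actual training code:)

                ```python
                # The student is trained to match the teacher's softened
                # output distribution via KL divergence.
                import torch
                import torch.nn.functional as F

                def distillation_loss(student_logits, teacher_logits, t=2.0):
                    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
                    log_student = F.log_softmax(student_logits / t, dim=-1)
                    # KL(teacher || student), scaled by t**2 as in Hinton et al.
                    return F.kl_div(log_student, soft_teacher,
                                    reduction="batchmean") * t * t

                # e.g. a batch of 4 positions over a 32k-token vocabulary:
                loss = distillation_loss(torch.randn(4, 32000),
                                         torch.randn(4, 32000))
                ```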

                At the end of the day, the problem is not that you can’t ever make something useful with transformer models; it’s that you cannot make that useful thing in a way that is cost effective. That’s especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that’s worth Jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.

            • NoiseColor
              5 points · 12 days ago

              Amazon did not turn a profit for 14 years. That’s not a sign of a crash.

              AI is progressing, and different routes are being tried. Some might not work as well as others. We are on a very fast train. I think a crash is unlikely. The prize is too valuable, and it’s strategically impossible to leave it to someone else.

              • Glifted
                13 points · 12 days ago

                Amazon isn’t a good comparison. People need to buy things. Having a better way to do that was and is worth billions.

                There is no revolutionary product that people need on the horizon for AI. The products released using it are mostly just fun toys, because it can’t be trusted with anything serious. There’s no indication this will change in the near or distant future.

                • NoiseColor
                  2 points · 12 days ago

                  People don’t need to buy anything from Amazon. That’s not a need.

                  There is no revolutionary product on the horizon!?! I’m not sure how to respond to that.

                  You think it’s all a scam and everyone is in on it?

              • @Voroxpete@sh.itjust.works
                6 points · 12 days ago

                Assuming it cost Microsoft $0 to provide their AI services (an assumption up there with “assuming all of physics stops working”), and that every dollar they make from Copilot was pure profit, it would take Microsoft 384 years to recoup one year of investment in AI.
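
                (The arithmetic behind that 384, with inputs assumed for scale since the comment doesn’t give its exact figures:)

                ```python
                # Illustrative inputs: tens of billions in, tens of millions out.
                annual_ai_investment = 19_200_000_000
                annual_copilot_revenue = 50_000_000
                print(annual_ai_investment / annual_copilot_revenue)  # -> 384.0 years
                ```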

                And that’s without even getting into the fact that in reality these services are so expensive to run that every time a customer uses them it’s a net loss to the provider.

                When Amazon started out, no one had heard of them. Everyone has heard of Microsoft. Everyone already uses Microsoft’s products. Everyone has heard about AI. It’s the only thing in tech that anyone is talking about. It’s hard to see how they could be doing more to market this. Same story with OpenAI, Facebook, Google, basically every player in this space.

                Even if they can solve the efficiency problems to the point where they can actually make a profit off of these things, there just isn’t enough interest. AI does plenty of things that are useful, but nothing that’s truly vital, and it needs to be vital to have any hope of making back the money that’s gone into it.

                At present, there simply is not a path to profitability that doesn’t rely on unicorn farts and pixie dust.

                • NoiseColor
                  1 point · 12 days ago

                  The companies developing AI don’t need to make a profit, just the same as Amazon didn’t. They are in the development phase. Profit is not a big concern.

          • @MrSmith@lemmy.world
            2 points · 10 days ago

            There aren’t any reasoning models now. LLMs cannot reason (and the whole “reasoning” BS has just been busted by Apple), just like they can’t orgasm, no matter what daddy Sam tells you.

    • Glifted
      2 points · 12 days ago

      The problem is that it will hurt everyone when they fail.

        • Glifted
          2 points · 12 days ago

          What I am saying is that the investments are at such a scale that it could cause a recession when these companies fail. Meaning it’s likely to affect everyone in the economy we all work in. You won’t need to be working on AI to feel the impact.

        • @LycanGalen@lemmy.world
          1 point · 12 days ago

          I agree with you about feeling no pity for the tech bros. However, a big appeal of AI for them is the elimination of employees. And that’s going to hurt regular folks who did not sign up for AI on a much more noticeable level. I don’t think any nation is set up to handle the level of unemployment that’s on the horizon. So, setting aside the environmental impacts of LLM/AI servers: let’s get national food, shelter, and healthcare systems in place, and then I’d be all for letting the venture capitalists shove their dicks in blenders.

  • @fodor@lemmy.zip
    18 points · 13 days ago

    Yes, of course they are at the limit. And because they poisoned the internet with generative bullshit, they can’t scrape it and expect improvement; but they are still scraping it, so they’re poisoning themselves.

    The end of the article has classic snake-oil trash: the idea that newer AI could be trained to think similarly to how humans think. Yes, great; you know, scientists have been working on that for decades. Good luck succeeding where nobody else did. There’s a reason that so-called weak AI, the so-called expert systems, are the ones that we all remember as having lasted for decades.

  • @Ledericas@lemm.ee
    16 points · 12 days ago

    The only people ever obsessed with AI were corporate heads looking to reduce headcount in their companies and to suck up more VC money.

    • @pinball_wizard@lemmy.zip
      2 points · 11 days ago

      Right. And now AI has failed to deliver the promised miracles (as expected) for three years. So it’s time to pick a new hype train to introduce the venture capitalists to.

      All aboard!

      • @Ledericas@lemm.ee
        2 points · 11 days ago

        I just went by a convention center; they are still hyping it up with tech conventions every few days. I’m in the west, so they concentrate all the tech hyping in the west.

    • Norah (pup/it/she)
      26 points · 13 days ago

      I don’t think it’s just the poison, but an inherent limitation of the technology. An LLM is never going to be able to have critical thinking skills.

  • @kreskin@lemmy.world
    8 points · 11 days ago

    We’re going to need a new BS tech meme for arsehole investors to speculate on. What’s next? I’m guessing something medical. Personalized health care, perhaps.

    • @pinball_wizard@lemmy.zip
      6 points · 11 days ago

      I think you’re right.

      We’re due for something DNA-based, since they can all grab a cheap copy of the 23andMe data set now.

  • I got into AI in 2023 for a few months just to see if I could make sense of it all.

    The whole thing as an industry is almost entirely smoke and mirrors meant to confuse, obscuring theft and fraud.

    It does have some neat applications and opportunities (generating templates, storyboarding), but it warrants a small R&D team of enthusiasts, not the collective investment and resources of entire nations.

    • @xor@lemmy.dbzer0.com
      3 points · 11 days ago

      What you’re forgetting is: it makes all video blackmail tapes useless… or it’s getting closer to that…
      At any rate, that’s the only way I can rationalize the amount of money going into it…
      Btw, did you know that when the FBI raided Epstein’s island, they forgot to get a warrant for his safe, which was full of hard drives and videos… and after they got the warrant, the safe had been emptied…
      From a secured FBI crime scene on an island…
      I guess nobody talks about it because there’s not much else to the story…
      Well, except: how was the safe even excluded? If they’re searching the whole property, wouldn’t the safe be included?