LOOK MAA I AM ON FRONT PAGE

  • @minoscopede@lemmy.world
    66
    edit-2
    4 days ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. It does not show a problem with LLMs’ abilities in general. The issue they discovered is specific to so-called “reasoning models” that iterate on their answer before replying, and it may indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
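
    A toy sketch of the incentive gap described above (hypothetical code, not any lab’s actual training setup): an outcome-only reward gives full credit for a correct final answer even when the intermediate steps are wrong, while a process-based reward would penalize the bad steps.

```python
# Toy illustration: outcome-only reward scores only the final answer, so
# wrong intermediate "reasoning" steps are never directly corrected.

def outcome_only_reward(final_answer, correct_answer):
    """Reward based solely on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, step_checker):
    """Hypothetical process-based reward: score each intermediate step."""
    return sum(step_checker(s) for s in steps) / len(steps)

# A trace whose first arithmetic step is wrong but whose final answer is right.
trace = {"steps": ["2 + 2 = 5", "5 - 1 = 4"], "final": 4}

is_valid = lambda s: eval(s.replace("=", "=="))  # check each step's equation

print(outcome_only_reward(trace["final"], 4))    # 1.0 despite the bad step
print(process_reward(trace["steps"], is_valid))  # 0.5: flags the bad step
```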

    • @Knock_Knock_Lemmy_In@lemmy.world
      21
      4 days ago

      When given explicit instructions to follow, models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

      • @MangoCats@feddit.it
        5
        3 days ago

        I’m not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

        If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

          • @MangoCats@feddit.it
            3
            3 days ago

            Well - if you want to devolve into argument, you can argue all day long about “what is reasoning?”

            • @Knock_Knock_Lemmy_In@lemmy.world
              4
              edit-2
              3 days ago

              You were starting a new argument. Let’s stay on topic.

              The paper implies “reasoning” is the application of logic. It shows that LRMs are great at copying logic but can’t follow simple instructions they haven’t seen before.

            • @technocrit@lemmy.dbzer0.com
              3
              edit-2
              3 days ago

              This would be a much better paper if it addressed that question in an honest way.

              Instead they just parrot the misleading terminology that they’re supposedly debunking.

              How dat collegial boys club undermines science…

    • @theherk@lemmy.world
      18
      4 days ago

      Yeah these comments have the three hallmarks of Lemmy:

      • AI is just autocomplete mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for at least being in that last category.

    • @REDACTED@infosec.pub
      14
      edit-2
      4 days ago

      What confuses me is that we seemingly keep pushing back what counts as reasoning. Not too long ago, some smart algorithms, or a bunch of if/then instructions in software, officially counted, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI does that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point a more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
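
      For contrast, the classic if/then “reasoning systems” the linked article describes can be sketched in a few lines as forward chaining (an illustrative toy example, not taken from the article):

```python
# Minimal forward-chaining sketch of a classic rule-based reasoning system:
# keep applying if/then rules until no new facts can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["socrates_is_human"], "socrates_is_mortal"),
    (["socrates_is_mortal"], "socrates_will_die"),
]

derived = forward_chain(["socrates_is_human"], rules)
print(sorted(derived))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```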

      • @stickly@lemmy.world
        6
        3 days ago

        If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It’s like comparing PhD reasoning to a dog’s reasoning.

        While a dog can learn some interesting tricks and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g. why they fail at the shell game).

        Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it’s designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don’t have the tech to make a synthetic human.

      • @technocrit@lemmy.dbzer0.com
        2
        3 days ago

        Sure, these grifters are shady AF about their wacky definition of “reason”… But that’s just a continuation of the entire “AI” grift.

      • @MangoCats@feddit.it
        2
        4 days ago

        I think as we approach the uncanny valley of machine intelligence, it’s no longer a cute cartoon but a menacing creepy not-quite imitation of ourselves.

    • @technocrit@lemmy.dbzer0.com
      6
      edit-2
      3 days ago

      There’s probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

      If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

    • @Zacryon@feddit.org
      9
      4 days ago

      Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data affirming this assessment.

      • @kreskin@lemmy.world
        3
        edit-2
        4 days ago

        Lots of us who have done some time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything the execs don’t understand is profitable and worth doing.

        • @Zacryon@feddit.org
          3
          3 days ago

          Ragebait?

          I’m in robotics and find plenty of use for ML methods. Think of image classifiers: how would you approach that without oversimplified problem settings?
          Or control and coordination problems, which can become NP-hard. Even though they aren’t optimal, ML methods are quite solid at learning patterns in high-dimensional, NP-hard problem settings. They often beat hand-crafted suboptimal solvers on computation effort vs. solution quality, and especially beat (asymptotically) optimal solvers time-wise, even if their solutions are only “good enough”. (To be fair, suboptimal solvers do that as well, but since ML methods can outperform these, I see them as an attractive middle ground.)
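
          The trade-off described above can be illustrated without any ML (a toy example; the city count and the two solvers are arbitrary choices for illustration): an exact solver is optimal but combinatorial, while a cheap heuristic is near-instant and “good enough”. Learned solvers target the same middle ground.

```python
# Toy TSP: exact solver (optimal, O(n!)) vs. nearest-neighbour heuristic
# (suboptimal, O(n^2)) to show the quality-vs-effort trade-off.
from itertools import permutations
import math, random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: enumerate all 8! = 40320 tours; hopeless already at n = 20.
best = min(permutations(range(len(cities))), key=tour_length)

# Heuristic: always hop to the closest unvisited city.
unvisited = set(range(1, len(cities)))
tour = [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
    tour.append(nxt)
    unvisited.remove(nxt)

print(f"optimal: {tour_length(best):.3f}  heuristic: {tour_length(tour):.3f}")
```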

      • @jj4211@lemmy.world
        2
        3 days ago

        Particularly to counter some more baseless marketing assertions about the nature of the technology.

    • AbuTahirOP
      4
      3 days ago

      Cognitive scientist Douglas Hofstadter (1979) argued that reasoning emerges from pattern recognition and analogy-making, abilities that modern AI demonstrably possesses. The question isn’t whether AI can reason, but how its reasoning differs from ours.

    • @Tobberone@lemm.ee
      5
      4 days ago

      What statistical method do you base that claim on? The results presented match expectations given that Markov chains are still the basis of inference. What magic juice is added to “reasoning models” that allow them to break free of the inherent boundaries of the statistical methods they are based on?

      • @minoscopede@lemmy.world
        3
        edit-2
        3 days ago

        I’d encourage you to research more about this space and learn more.

        As it stands, the statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents. But that’s also unrelated, because these models are not RL agents; they’re trained with supervised learning. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it isn’t used at inference time.

        I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.
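
        To make the terminology concrete, here is a toy first-order Markov chain text model (illustrative only): its next token depends solely on the current token, whereas a transformer conditions on the entire context window, which is why a classic Markov chain is a poor description of LLM inference.

```python
# Toy first-order Markov chain text model: the next token depends ONLY on
# the current token (one state). A transformer instead conditions on the
# whole context, so its next-token distribution is not a one-state function.
from collections import defaultdict
import random

random.seed(1)
corpus = "the cat sat on the mat the cat ate".split()

# Count bigram transitions: candidates for "next" given "current".
transitions = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur].append(nxt)

def markov_sample(start, n):
    out = [start]
    for _ in range(n):
        nexts = transitions[out[-1]]  # only the last token matters
        if not nexts:                 # dead end: token never seen mid-corpus
            break
        out.append(random.choice(nexts))
    return out

print(" ".join(markov_sample("the", 4)))
```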

        • @Tobberone@lemm.ee
          1
          2 days ago

          Which method, then, is the inference built upon, if not the embeddings? And the question still stands, how does “AI” escape the inherent limits of statistical inference?