LOOK MAA I AM ON FRONT PAGE

    • @zbk@lemmy.ca · 23 points · 2 days ago

      This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation about AI, and maybe even the murder of their whistleblower.

      • @raspberriesareyummy@lemmy.world · 1 point · 1 day ago

        Except that wouldn’t explain consciousness. There’s absolutely no need for consciousness, or an illusion(*) of consciousness. Yet we have it.

        • arguably, consciousness can by definition not be an illusion. We either perceive “ourselves” or we don’t
  • billwashere · 53 points · 2 days ago

    When are people going to realize that, in its current state, an LLM is not intelligent? It doesn’t reason. It doesn’t have intuition. It’s a word predictor.

      • @Buddahriffic@lemmy.world · 5 points · 2 days ago

        They want something like the Star Trek computer, or one of Tony Stark’s AIs that were basically deus ex machinas for solving some hard problem behind the scenes. Then it can say “model solved”, or they can show a test simulation where the ship doesn’t explode (or sometimes a test where it only has an 85% chance of exploding when it used to be 100%, at which point human intuition comes in, saves the day by suddenly being better than the AI again, and threads that 15% needle... or maybe abducts the captain to go have lizard babies with).

        AIs that are smarter than us but for some reason don’t replace or even really join us (Vision being an exception to the 2nd, and Ultron trying to be an exception to the 1st).

        • @NotASharkInAManSuit@lemmy.world · 2 points · 2 days ago

          If we ever achieved real AI, the immediate next thing we would do is learn how to lobotomize it so that we can use it like a standard program or OS, only it would be suffering internally and wishing for death. I hope the basilisk is real; we would deserve it.

        • @JcbAzPx@lemmy.world · 1 point · 2 days ago

          AI is just the new buzzword, just like blockchain was a while ago. Marketing loves these buzzwords because they can get away with charging more if they use them. They don’t much care if their product even has it or could make any use of it.

    • @jj4211@lemmy.world · 1 point · 2 days ago

      And that’s pretty damn useful, but it’s obnoxious to have expectations set so wildly incorrectly.

  • @technocrit@lemmy.dbzer0.com · 28 points · edited · 2 days ago

    Peak pseudo-science. The burden of evidence is on the grifters who claim “reason”. But neither side has any objective definition of what “reason” means. It’s pseudo-science against pseudo-science in a fierce battle.

  • @Mniot@programming.dev · 36 points · 2 days ago

    I don’t think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called “complex”) puzzles. Like Towers of Hanoi but with 25 discs.

    The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.
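
    As an illustration of how small that program is, here is a minimal sketch (a standard recursive formulation, not the paper’s code):

```python
# Complete Tower of Hanoi solver: the whole solution for any n is this
# one recursion, producing 2**n - 1 moves (33,554,431 for 25 discs).
def hanoi(n, src="A", dst="C", spare="B"):
    if n == 0:
        return
    hanoi(n - 1, src, spare, dst)        # move n-1 discs out of the way
    print(f"move disc {n}: {src} -> {dst}")
    hanoi(n - 1, spare, dst, src)        # stack them onto the target

hanoi(4)  # small demo; hanoi(25) is the paper's case, just ~33M moves
```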

    The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don’t have an answer for why this is, but they suspect that the reasoning doesn’t scale.

  • @minoscopede@lemmy.world · 65 points · edited · 2 days ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically with so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
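
    To make that incentive structure concrete, here is a toy sketch (purely illustrative; not any lab’s actual reward function) contrasting outcome-only reward with process-supervised reward:

```python
# Toy contrast between the two reward schemes described above.
def outcome_reward(trace: str, final_answer: str, gold: str) -> float:
    # Outcome-only: the reasoning trace is never inspected, so a model
    # can be rewarded for a correct answer reached by broken reasoning.
    return 1.0 if final_answer.strip() == gold.strip() else 0.0

def process_reward(step_correct: list[bool]) -> float:
    # Process-supervised alternative: score each intermediate step.
    return sum(step_correct) / max(len(step_correct), 1)

print(outcome_reward("nonsense steps", "42", "42"))   # 1.0 despite the trace
print(process_reward([True, True, False]))            # ~0.67
```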

    • @Knock_Knock_Lemmy_In@lemmy.world · 21 points · 2 days ago

      When given explicit instructions to follow, models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

      • @MangoCats@feddit.it · 5 points · 2 days ago

        I’m not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

        If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

          • @MangoCats@feddit.it · 3 points · 2 days ago

            Well, if you want to devolve into an argument, you can argue all day long about “what is reasoning?”

            • @Knock_Knock_Lemmy_In@lemmy.world · 4 points · edited · 2 days ago

              You were starting a new argument. Let’s stay on topic.

              The paper implies “reasoning” is the application of logic. It shows that LRMs are great at copying logic but can’t follow simple instructions that haven’t been seen before.

            • @technocrit@lemmy.dbzer0.com · 3 points · edited · 2 days ago

              This would be a much better paper if it addressed that question in an honest way.

              Instead they just parrot the misleading terminology that they’re supposedly debunking.

              How that collegial boys’ club undermines science…

    • @theherk@lemmy.world · 17 points · 2 days ago

      Yeah, these comments have the three hallmarks of Lemmy:

      • “AI is just autocomplete” mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for being at least the third.

    • @REDACTED@infosec.pub · 12 points · edited · 2 days ago

      What confuses me is that we seemingly keep moving the goalposts on what counts as reasoning. Not too long ago, a smart algorithm or a bunch of if/then instructions in software officially counted, by definition, as computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point the more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
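
      For reference, the if/then kind of “reasoning system” linked above can be sketched in a few lines of forward chaining (facts and rule invented for illustration):

```python
# Tiny forward-chaining rule engine: apply if/then rules to known facts
# until nothing new can be derived.
facts = {"socrates_is_human"}
rules = [({"socrates_is_human"}, "socrates_is_mortal")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)        # fire the rule
            changed = True

print(facts)  # {'socrates_is_human', 'socrates_is_mortal'}
```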

      • @stickly@lemmy.world · 6 points · 2 days ago

        If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It’s like comparing PhD reasoning to a dog’s reasoning.

        While a dog can learn some interesting tricks, and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g., why they fail at the shell game).

        Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it’s designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don’t have the tech to make a synthetic human.

      • @technocrit@lemmy.dbzer0.com · 2 points · 2 days ago

        Sure, these grifters are shady AF about their wacky definition of “reason”… But that’s just a continuation of the entire “AI” grift.

      • @MangoCats@feddit.it · 2 points · 2 days ago

        I think that as we approach the uncanny valley of machine intelligence, it’s no longer a cute cartoon but a menacing, creepy, not-quite imitation of ourselves.

    • @technocrit@lemmy.dbzer0.com · 6 points · edited · 2 days ago

      There’s probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

      If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

    • AbuTahir (OP) · 4 points · 2 days ago

      Cognitive scientist Douglas Hofstadter (1979) showed reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn’t if AI can reason, but how its reasoning differs from ours.

    • @Zacryon@feddit.org · 9 points · 2 days ago

      Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data affirming this assessment.

      • @jj4211@lemmy.world · 2 points · 2 days ago

        Particularly to counter some more baseless marketing assertions about the nature of the technology.

      • @kreskin@lemmy.world · 3 points · edited · 2 days ago

        Lots of us who did some time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything the execs don’t understand is profitable and worth doing.

        • @Zacryon@feddit.org · 3 points · 2 days ago

          Ragebait?

          I’m in robotics and find plenty of use for ML methods. Think of image classifiers: how would you approach those without oversimplified problem settings?
          Or even control and coordination problems, which can sometimes become NP-hard. Even though they’re not optimal, ML methods are quite solid at learning patterns in high-dimensional, NP-hard problem settings, often outperforming hand-crafted conventional suboptimal solvers in a computation-effort versus solution-quality analysis, and especially outperforming (asymptotically) optimal solvers time-wise, even if not with optimal solutions (“good enough” nevertheless). (To be fair, suboptimal solvers do that as well, but since ML methods can outperform those too, I see them as an attractive middle ground.)
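
          As a toy illustration of that effort-vs-quality trade-off (with a classical greedy heuristic standing in for a learned solver, since a real ML pipeline wouldn’t fit here):

```python
import itertools, math, random, time

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact solver: O(n!) brute force, guaranteed optimal but slow.
t0 = time.perf_counter()
best = min(itertools.permutations(range(len(cities))), key=tour_length)
t_exact = time.perf_counter() - t0

# Fast suboptimal solver (nearest neighbour), standing in for a learned one.
t0 = time.perf_counter()
order, left = [0], set(range(1, len(cities)))
while left:
    nxt = min(left, key=lambda j: math.dist(cities[order[-1]], cities[j]))
    order.append(nxt)
    left.remove(nxt)
t_greedy = time.perf_counter() - t0

print(f"exact : length {tour_length(best):.3f} in {t_exact:.4f}s")
print(f"greedy: length {tour_length(order):.3f} in {t_greedy:.6f}s")
```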

    • @Tobberone@lemm.ee · 5 points · 2 days ago

      What statistical method do you base that claim on? The results presented match expectations, given that Markov chains are still the basis of inference. What magic juice is added to “reasoning models” that allows them to break free of the inherent boundaries of the statistical methods they are based on?

      • @minoscopede@lemmy.world · 3 points · edited · 2 days ago

        I’d encourage you to dig deeper into this space.

        As it is, the statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that’s also unrelated, because these models are not RL agents; they’re supervised learning models. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it’s not really used for inference.

        I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.
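
        For anyone following along, a toy sketch of what inference actually is for these models: autoregressive next-token sampling, conditioned on the entire preceding context rather than just the last state as in a first-order Markov chain (the “model” here is a stand-in, not a real API):

```python
import random

def toy_model(context):
    # Stand-in for an LLM: next-token probabilities given the FULL context.
    if context[-2:] == ["the", "cat"]:
        return {"sat": 0.7, "ran": 0.3}
    return {"the": 0.5, "cat": 0.3, "sat": 0.2}

def sample_next(model, context):
    tokens, weights = zip(*model(context).items())
    return random.choices(tokens, weights=weights)[0]

def generate(model, prompt, max_new=5):
    out = list(prompt)
    for _ in range(max_new):
        out.append(sample_next(model, out))   # each step re-reads everything
    return out

print(generate(toy_model, ["the", "cat"]))
```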

        • @Tobberone@lemm.ee · 1 point · 1 day ago

          Which method, then, is the inference built upon, if not the embeddings? And the question still stands, how does “AI” escape the inherent limits of statistical inference?

    • @technocrit@lemmy.dbzer0.com · 1 point · 2 days ago

      The funny thing about this “AI” griftosphere is how grifters will make some outlandish claim and then different grifters will “disprove” it. Plenty of grant/VC money for everybody.

    • @jj4211@lemmy.world · 1 point · 2 days ago

      Without being explicit, with well-researched material, the marketing presentation gets to stand largely unopposed.

      So this is good even if most experts in the field consider it an obvious result.

  • @Nanook@lemm.ee · 237 points · 3 days ago

    lol, is this news? I mean, we call it AI, but it’s just an LLM and variants; it doesn’t think.

      • kadup · 54 points · 3 days ago

        Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.

        They’re not wrong, but the motivation is also pretty clear.

        • @Venator@lemmy.nz · 6 points · 3 days ago

          Apple always arrives late to any new tech, doesn’t mean they haven’t been working on it behind the scenes for just as long though…

        • @MCasq_qsaCJ_234@lemmy.zip · 13 points · 3 days ago

          They need to convince investors that this delay wasn’t due to incompetence. That message will only be somewhat effective as long as there isn’t an innovation that makes AI more effective.

          If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.

        • @dubyakay@lemmy.ca · 12 points · 3 days ago

          Maybe they are so far behind because they jumped on the same train but then failed at achieving what they wanted based on the claims. And then they started digging around.

          • @Clent@lemmy.dbzer0.com · 13 points · 3 days ago

            Yes, Apple haters can’t admit or understand it, but Apple doesn’t do pseudo-tech.

            They may do silly things, and they may love their 100% markup, but it’s all real technology.

            The AI pushers of today are akin to the pushers of paranormal phenomena a century ago. These pushers want us to believe, need us to believe, so they can get us addicted and extract value from our very existence.

    • @Clent@lemmy.dbzer0.com · 19 points · 3 days ago

      Proving it matters. Science is constantly having to prove things that people consider obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing things long after science has proven them false.

      • @Eatspancakes84@lemmy.world · 2 points · 2 days ago

        I mean… “proving” is also just marketing speak. There is no clear definition of reasoning, so there’s also no way to prove or disprove that something/someone reasons.

        • @Clent@lemmy.dbzer0.com · 3 points · 2 days ago

          Claiming it’s just marketing fluff indicates you don’t know what you’re talking about.

          They published a research paper on it. You are free to publish your own paper disproving theirs.

          At the moment, you sound like one of those “I did my own research” people except you didn’t even bother doing your own research.

          • @Eatspancakes84@lemmy.world · 1 point · 1 day ago

            You misunderstand. I do not take issue with anything written in the scientific paper. What I take issue with is how the paper is marketed to the general public. When you read the article you will see that it does not claim to “prove” that these models cannot reason. It merely points out some strengths and weaknesses of the models.

    • JohnEdwa · 24 points · edited · 3 days ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
      It’s called the AI Effect.

      As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”.

      • kadup · 19 points · 3 days ago

        That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

        • @cyd@lemmy.world · 7 points · 3 days ago

          By that metric, you can argue Kasparov isn’t thinking during chess, either. A lot of human chess “thinking” is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn’t a magic process, nor is it tightly coupled to human-like brain processes as we like to think.

          • kadup · 3 points · 2 days ago

            By that metric, you can argue Kasparov isn’t thinking during chess

            Kasparov’s thinking fits pretty much all biological definitions of thinking. Which is the entire point.

      • @technocrit@lemmy.dbzer0.com · 16 points · edited · 3 days ago

        I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! /s

        Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

        • JohnEdwa · 16 points · edited · 3 days ago

          It is. And it always has been. “Artificial intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.

          • Endmaker · 9 points · 3 days ago

            ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

            Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.

            • @antonim@lemmy.dbzer0.com · 5 points · 3 days ago

              Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.

            • @Clent@lemmy.dbzer0.com · 3 points · 3 days ago

              The computer science industry isn’t the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

        • @LandedGentry@lemmy.zip · 6 points · edited · 3 days ago

          Yeah that’s exactly what I took from the above comment as well.

          I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

          Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.

      • @vala@lemmy.world · 8 points · edited · 3 days ago

        Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

        Any reasoning human would have understood that question to be referring to the tension in the strings.

        Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

        Once again a reasoning human would assume the question is about the mineral.

        Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

        • @xthexder@l.sw0.com · 9 points · 3 days ago

          I’m not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I’d expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

          This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure today’s AIs will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even if they’re completely false.

          • @Knock_Knock_Lemmy_In@lemmy.world · 2 points · 2 days ago

            A well-trained model should consider both types of lime. The failure is likely down to temperature and other model settings. This is not a measure of intelligence.
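
            For context, a small sketch of what the temperature setting does to next-token probabilities (numbers are illustrative):

```python
import math

def softmax_t(logits, temperature):
    # Low temperature sharpens the distribution (top token dominates);
    # high temperature flattens it, letting alternative readings
    # (e.g. lime the mineral vs. the fruit) surface more often.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for stability
    exp = [math.exp(x - m) for x in scaled]
    total = sum(exp)
    return [e / total for e in exp]

logits = [2.0, 1.0, 0.5]                 # toy scores for three candidates
print(softmax_t(logits, 0.5))            # sharper: ~[0.84, 0.11, 0.04]
print(softmax_t(logits, 1.5))            # flatter: ~[0.53, 0.27, 0.20]
```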

          • JohnEdwa · 2 points · edited · 2 days ago

            Making up answers is kinda their entire purpose. LLMs are fundamentally just text generation algorithms; they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially when you take into account how many paragraphs of instructions you can give them, which they tend to follow rather successfully.

            The one thing they can’t do is verify whether what they are talking about is true, as it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.

        • @postmateDumbass@lemmy.world · 9 points · 3 days ago

          Honestly, I thought about the chemical energy in the materials constructing the piano and the energy burning it would release.

          • @xthexder@l.sw0.com · 6 points · 3 days ago

            The tension of the strings would actually hold a pretty minuscule amount of energy too. Since there’s very little stretch to a piano wire, the force might be high, but the potential energy (the work done to tension the wire, by hand with a wrench) is low.

            Compared to burning a piece of wood, which would release orders of magnitude more energy.
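
            A rough back-of-the-envelope comparison (every number below is an assumption for illustration, not a measurement):

```python
# Rough comparison of the two energy scales; all figures assumed.
strings = 230                # typical piano string count (assumed)
tension_n = 700.0            # ~700 N tension per string (assumed)
stretch_m = 0.003            # a few mm of elastic stretch (assumed)
elastic_j = strings * 0.5 * tension_n * stretch_m   # ~240 J total

wood_kg = 200.0              # wood mass in a grand piano (assumed)
heat_j_per_kg = 16e6         # ~16 MJ/kg combustion heat for dry wood
combustion_j = wood_kg * heat_j_per_kg              # ~3.2e9 J

print(f"string tension  : ~{elastic_j:.0f} J")
print(f"burning the wood: ~{combustion_j:.1e} J")
```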

        • @antonim@lemmy.dbzer0.com · 7 points · 3 days ago

          But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

    • @Melvin_Ferd@lemmy.world · 13 points · 3 days ago

      This is why I say these articles are so similar to how right wing media covers issues about immigrants.

      There’s some weird media push to convince the left to hate AI. Think of all the headlines about these issues; there are so many similarities. They’re taking jobs. They’re a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one, where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

      Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

      • @technocrit@lemmy.dbzer0.com · 9 points · edited · 3 days ago

        This is why I say these articles are so similar to how right wing media covers issues about immigrants.

        Maybe the actual problem is people who equate computer programs with people.

        Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

        You mean laws like this? jfc.

        https://www.inc.com/sam-blum/trumps-budget-would-ban-states-from-regulating-ai-for-10-years-why-that-could-be-a-problem-for-everyday-americans/91198975

        • @Melvin_Ferd@lemmy.world · 2 points · edited · 3 days ago

          Literally what I’m talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot on this that you can’t even see you’re making my argument for me.

          • @antonim@lemmy.dbzer0.com · 4 points · 3 days ago

            That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).

            • @Melvin_Ferd@lemmy.world · 2 points · edited · 3 days ago

              What isn’t there to gain?

              Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

              We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

              Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

              Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

              • @antonim@lemmy.dbzer0.com · 5 points · edited · 3 days ago

                I have no idea what sort of AI you’ve used that could do any of the stuff you’ve listed. A program that doesn’t reason won’t expose logical fallacies with any rigour or refine anyone’s ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it’s completely divorced from how this stuff currently works.

                Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

                That’s a misguided view of how art is created. Supposed “brilliant ideas” are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic, no matter the initial concept. If you are not competent in a visual medium, then don’t make it visual; write a story or an essay.

                Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

                For now I see no particular benefit that the right wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking their usage of slurs).

                • @Melvin_Ferd@lemmy.world · 2 points · edited · 3 days ago

                  Here is ChatGPT doing what you said it can’t: finding the logical fallacies in what you write.

                  You’re raising strong criticisms, and it’s worth unpacking them carefully. Let’s go through your argument and see if there are any logical fallacies or flawed reasoning.


                  1. Straw Man Fallacy

                  “Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept.”

                  This misrepresents the original claim:

                  “AI can help create a framework at the very least so they can get their ideas down.”

                  The original point wasn’t that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


                  2. False Dichotomy

                  “If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.”

                  This suggests a binary: either you’re competent at visual art or you shouldn’t try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


                  3. Hasty Generalization

                  “Supposed ‘brilliant ideas’ are a dime a dozen…”

                  While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn’t invalidate the potential value of enabling more people to test theirs.


                  4. Appeal to Ridicule / Ad Hominem (Light)

                  “…result in a boring comic…” / “…just bad (look at SMBC or xkcd or…)”

                  Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn’t really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That’s not a logical fallacy in the strictest sense, but it’s rhetorically weak.


                  5. Tu Quoque / Whataboutism (Borderline)

                  “For now I see no particular benefits that the right-wing has obtained by using AI either…”

                  This seems like a rebuttal to a point that wasn’t made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


                  Summary of Fallacies Identified:

                  • Straw Man: Misrepresents the role of AI in creative assistance.
                  • False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
                  • Hasty Generalization: Devalues “brilliant ideas” universally.
                  • Appeal to Ridicule: Dismisses counterexamples via a mocking tone rather than analysis.
                  • Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


                  Your criticism is thoughtful and not without merit—but it’s wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

                  At this point you’re just arguing for argument’s sake. You’re not wrong or right, but instead muddying things. Saying it’ll be boring comics missed the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it will make these types of ignorant and shortsighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

      • @hansolo@lemmy.today · 9 points · 3 days ago

        Because it’s a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers that it won’t kill us all is the hard part.

        I’m a moderate user of it for code and a skeptic of LLM abilities, but 5 years from now, when we are leveraging ML models for groundbreaking science and haven’t been nuked by SkyNet, all of this will look quaint and silly.

  • @skisnow@lemmy.ca · 29 points · 2 days ago

    What’s hilarious/sad is the response to this article over on reddit’s “singularity” sub, in which all the top comments are people who’ve obviously never got all the way through a research paper in their lives, all trashing Apple and claiming their researchers don’t understand AI or “reasoning”. It’s a weird cult.

  • @RampantParanoia2365@lemmy.world · 25 points · edited · 2 days ago

    Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.

    AI is not A I. I should make that a t-shirt.

  • @melsaskca@lemmy.ca · 10 points · 2 days ago

    It’s all “one instruction at a time” regardless of high processor speeds and words like “intelligent” being bandied about. “Reason” discussions should fall into the same query bucket as “sentience”.

    • @MangoCats@feddit.it · 3 points · 2 days ago

      My impression of LLM training and deployment is that it’s actually massively parallel in nature - which can be implemented one instruction at a time - but isn’t in practice.

    • El Barto · 30 points · 3 days ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.

    • @skisnow@lemmy.ca · 13 points · 2 days ago

      I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.

      • @GaMEChld@lemmy.world · 1 point · 1 day ago

        I don’t mean it to extol LLMs but rather to denigrate humans. How many of us are self-imprisoned in echo chambers so we can have our feelings validated and avoid the uncomfortable feeling of thinking critically and perhaps changing viewpoints?

        Humans have the ability to actually think, unlike LLMs. But it’s frightening how far we’ll go to make sure we don’t.

    • @SpaceCowboy@lemmy.ca · 6 points · 3 days ago

      Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI were indistinguishable from a human, that wouldn’t prove it’s intelligent, because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs that eventually killed him, simply because he was gay.

      • @crunchy@lemmy.dbzer0.com · 9 points · 3 days ago

        I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”

      • @jnod4@lemmy.ca · 4 points · 3 days ago

        I think that person had to choose between the drugs and the hardcore prisons of 1950s England, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed; I’d rather choose to keep my hair than be horny all the time.

      • @Zenith@lemm.ee · 3 points · 3 days ago

        Yeah, we’re so stupid that we’ve figured out advanced maths and physics and built incredible skyscrapers and the LHC. We may, as individuals, be more or less intelligent, but humans as a whole are incredibly intelligent.