LOOK MAA I AM ON FRONT PAGE

  • JohnEdwa
    4 days ago

    "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." - Pamela McCorduck
    It’s called the AI Effect.

    As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”

    • kadup
      4 days ago

      That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

      • @cyd@lemmy.world
        4 days ago

        By that metric, you can argue Kasparov isn’t thinking during chess, either. A lot of human chess “thinking” is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn’t a magic process, nor is it tightly coupled to human-like brain processes as we like to think.

        • kadup
          4 days ago

          By that metric, you can argue Kasparov isn’t thinking during chess

          Kasparov’s thinking fits pretty much all biological definitions of thinking. Which is the entire point.

    • @technocrit@lemmy.dbzer0.com
      4 days ago

      I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s

      Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

      • JohnEdwa
        4 days ago

        It is, and has always been. “Artificial intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.

        • Endmaker
          4 days ago

          ITT: people who obviously did not study computer science or AI at even an undergraduate level.

          Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.

          • @antonim@lemmy.dbzer0.com
            4 days ago

            Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.

          • @Clent@lemmy.dbzer0.com
            4 days ago

            The computer science industry isn’t the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

      • @LandedGentry@lemmy.zip
        4 days ago

        Yeah that’s exactly what I took from the above comment as well.

        I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No, it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly, if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

        Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.

    • @vala@lemmy.world
      4 days ago

      Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

      Any reasoning human would have understood that question to be referring to the tension in the strings.

      Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

      Once again a reasoning human would assume the question is about the mineral.

      Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

      • @xthexder@l.sw0.com
        4 days ago

        I’m not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I’d expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

        This kind of just goes to show there’s multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure AIs today will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even if it’s completely false.

        • @Knock_Knock_Lemmy_In@lemmy.world
          4 days ago

          A well trained model should consider both types of lime. Failure is likely down to temperature and other model settings. This is not a measure of intelligence.
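          The temperature effect is easy to see with a toy softmax sampler. All numbers below are made up for illustration (real models sample over tens of thousands of tokens, not two readings of “lime”), but the mechanism is the same:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature).

    Low temperature sharpens the distribution toward the top choice;
    high temperature flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(exps)
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1

# Made-up logits for two readings of "lime":
# index 0 = citrus fruit, index 1 = calcium mineral
logits = [2.0, 1.5]
for t in (0.2, 1.0, 2.0):
    rng = random.Random(42)
    picks = [sample_with_temperature(logits, t, rng) for _ in range(2000)]
    print(f"T={t}: mineral chosen {picks.count(1) / 2000:.0%} of the time")
```

          At low temperature the model almost always picks the higher-scoring reading; as temperature rises, the lower-scoring one shows up more and more often, so the same prompt can flip between interpretations run to run.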

        • JohnEdwa
          4 days ago

          Making up answers is kinda their entire purpose. LLMs are fundamentally just a text generation algorithm: they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially once you take into account how many paragraphs of instructions you can give them, which they tend to follow rather successfully.

          The one thing they can’t do is verify if what they are talking about is true as it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.
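          The “slapping words together using probabilities” part can be sketched with a toy bigram model. A real LLM conditions on the whole context with billions of learned parameters, but the generation loop has the same shape: repeatedly sample the next token from a conditional distribution. Every entry in this table is invented for illustration:

```python
import random

# Toy table of P(next word | current word). A real LLM learns billions
# of parameters instead of this handful of entries, but generation is
# the same idea: sample the next token given the text so far.
bigrams = {
    "the":     {"piano": 0.5, "string": 0.3, "battery": 0.2},
    "piano":   {"has": 0.6, "wire": 0.4},
    "has":     {"no": 0.7, "the": 0.3},
    "no":      {"battery": 1.0},
    "wire":    {"has": 1.0},
    "string":  {"has": 1.0},
    "battery": {},  # no known continuation: generation stops here
}

def generate(start, max_words=8, seed=0):
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        dist = bigrams.get(words[-1])
        if not dist:
            break
        nxt, = rng.choices(list(dist), weights=list(dist.values()))
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

          Nothing in that loop checks whether the output is true; it only checks that each word plausibly follows the previous one, which is the commenter’s point about verification.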

      • @postmateDumbass@lemmy.world
        4 days ago

        Honestly, I thought about the chemical energy in the materials constructing the piano and what energy burning it would release.

        • @xthexder@l.sw0.com
          4 days ago

          The tension of the strings would actually store a pretty minuscule amount of energy too. Since there’s very little stretch to a piano wire, the force might be high, but the potential energy/work done to tension the wire is low (it’s done by hand with a wrench).

          Compared to burning a piece of wood, which would release orders of magnitude more energy.
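          A rough back-of-envelope bears this out. Every number below is an assumed ballpark (string count, tension, stretch, wood mass, heat of combustion), so treat it as an order-of-magnitude sketch only:

```python
# All values are rough assumptions for an order-of-magnitude estimate.
n_strings = 230            # strings in a typical grand piano
tension = 700.0            # N, typical per-string tension
stretch = 0.003            # m, rough elastic stretch when brought to pitch

# Elastic energy per string, treating it as a linear spring: ~ 1/2 * F * dL
string_energy = n_strings * 0.5 * tension * stretch
print(f"energy in string tension: ~{string_energy:.0f} J")

wood_mass = 180.0          # kg of wood in the case and frame (guess)
heat_of_combustion = 15e6  # J/kg, typical for dry wood
burn_energy = wood_mass * heat_of_combustion
print(f"energy from burning the wood: ~{burn_energy:.1e} J")

print(f"ratio: ~{burn_energy / string_energy:.0e}")
```

          A few hundred joules of elastic energy versus a few gigajoules of chemical energy: roughly seven orders of magnitude apart under these assumptions.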

      • @antonim@lemmy.dbzer0.com
        4 days ago

        But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.