• @supersquirrel@sopuli.xyz · 27 points · 7 days ago

    In 30 years the world will be an ecological wasteland from all the energy usage we spent pursuing dumb shit hype like “AI”.

      • @frezik@midwest.social · 6 points · 7 days ago (edited)

        There are local LLMs, they’re just less powerful. Sometimes, they do useful things.

        The human brain uses around 20W of power. Current models are obviously using orders of magnitude more than that to get substantially worse results. I don’t think power usage and results are going to converge enough before the money people decide AI isn’t going to be profitable.
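A quick back-of-envelope sketch of that gap, where every figure is an assumed round number rather than a measurement:

```python
# Back-of-envelope: energy per LLM query vs. the human brain over the same
# wall-clock time. Every figure below is an assumed, illustrative number.
BRAIN_POWER_W = 20.0      # commonly cited human brain power draw

GPU_POWER_W = 700.0       # one H100-class accelerator under load (assumption)
GPUS_PER_REPLICA = 8      # accelerators serving one model replica (assumption)
SECONDS_PER_QUERY = 2.0   # wall-clock time to answer one prompt (assumption)

llm_joules = GPU_POWER_W * GPUS_PER_REPLICA * SECONDS_PER_QUERY
brain_joules = BRAIN_POWER_W * SECONDS_PER_QUERY
ratio = llm_joules / brain_joules

print(f"LLM: {llm_joules:.0f} J per query")           # 11200 J
print(f"Brain, same interval: {brain_joules:.0f} J")  # 40 J
print(f"Ratio: ~{ratio:.0f}x")                        # ~280x
```

Two to three orders of magnitude under these made-up numbers; real deployments batch many queries per GPU, so the true ratio could be considerably lower or higher.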

        • @jj4211@lemmy.world · 5 points · 7 days ago

          The power consumption of the brain doesn’t really indicate anything about what we can expend on LLMs… Our brains are not just a biological implementation of the stuff done with LLMs.

          • @frezik@midwest.social · 7 points · 7 days ago

            It gives us an idea of what’s possible in a mechanical universe. It’s possible an artificial human-level consciousness and intelligence will use less power than that, or maybe somewhat more, but it’s a baseline that we know exists.

            • @spicehoarder@lemm.ee · 4 points · 7 days ago

              You’re making a lot of assumptions. One of them is that the brain is more efficient in terms of compute per watt than our current models. I’m not convinced that’s true, especially for specialized applications. Even if we brought power usage below 20 watts, the reason we currently use more is that we can, not that each model is becoming more and more bloated.

            • @Tryenjer@lemmy.world · 4 points · 6 days ago (edited)

              Yeah, but an LLM has little to do with a biological brain.

              I think Brain-Computer Interfaces (BCIs) will be the real deal.

      • @Buddahriffic@lemmy.world · 4 points · 7 days ago

        I was thinking in a different direction, that LLMs probably won’t be the pinnacle of AI, considering they aren’t really intelligent.

      • @menas@lemmy.wtf · 2 points · 7 days ago

        Assuming there would be enough food to maintain and fix that hardware, I’m not confident that we will have enough electricity to run LLMs on a massive scale.

  • @TheImpressiveX@lemm.ee · 148 points · 8 days ago

    Look, I’m not robophobic. Some of my best friends are cyborgs. I just don’t want them living in my neighborhood, you know?

  • @finitebanjo@lemmy.world · 42 points · 8 days ago

    Let’s not pretend statistical models are approaching humanity. The companies that make these statistical models proved it themselves, in papers published by OpenAI in 2020 and DeepMind in 2023.

    To reiterate: with INFINITE DATA AND COMPUTE TIME the models cannot approach human error rates. It doesn’t think, it doesn’t emulate thinking, it statistically resembles thinking to some number below 95%, and it completely and totally lacks permanence in its statistical representation of thinking.

      • @AppleTea@lemmy.zip · 5 points · 7 days ago

        If modern computers can reproduce sentience, then so can older computers. That’s just how general computing is. You really gonna claim magnetic tape can think? That punch-cards and piston transistors can produce the same phenomenon as tens of billions of living brain cells?

          • @AppleTea@lemmy.zip · 2 points · 7 days ago

            Slightly yeah, but I’m still overall pretty skeptical. We still don’t really understand consciousness. It’d certainly be convenient if the calculating machines we understand and have everywhere could also “do” whatever it is that causes consciousness… but it doesn’t seem particularly likely.

    • 6 points · 7 days ago

      Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years time you may also be wrong.

      • @finitebanjo@lemmy.world · 2 points · 7 days ago

        Well, if you want one that’s 98% accurate then you were actually correct that it’s science fiction for the foreseeable future.

        • 1 point · 7 days ago

          And yet I just foresaw a future in which it wasn’t. AI has already exceeded Trump levels of understanding, intelligence and truthfulness. Why wouldn’t it beat you or me later? Exponential growth in computing power and all that.

          • @finitebanjo@lemmy.world · 5 points · 7 days ago (edited)

            The diminishing returns scale much faster than the fairly static (and in many sectors plateauing) growth in computing power, but if you believe OpenAI and DeepMind, then their 2020 and 2023 studies have already shown that even INFINITE processing power cannot reach it.

            They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.

            EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.
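The diminishing-returns shape of those results can be sketched with a toy power-law loss curve; the functional form echoes published scaling-law fits, but the constants here are invented purely for illustration:

```python
# Toy scaling-law curve: loss(C) = E + A * C^(-alpha).
# E is an irreducible error floor that no amount of compute C removes.
# A, ALPHA, E are invented constants; only the curve's shape matters.
A, ALPHA, E = 400.0, 0.15, 1.69

def loss(compute_flops: float) -> float:
    return E + A * compute_flops ** -ALPHA

# Each 10x jump in compute buys a smaller absolute improvement,
# and the loss can never drop below the floor E.
for c in (1e20, 1e21, 1e22, 1e23):
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
```

Under a fit like this, more compute always helps a little, but the floor E is never crossed, which is one way to read the "infinite compute isn’t enough" claim.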

            • 5 points · 7 days ago (edited)

              Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.

    • @Gorilladrums@lemmy.world · 4 points · 7 days ago

      I think most people understand that these LLMs cannot think or reason; they’re just really good tools that can analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.

      • @finitebanjo@lemmy.world · 6 points · 7 days ago

        Then you clearly haven’t been paying attention, because just as zealously as you defend its nonexistent use cases, there are people defending the idea that it operates similarly to how a human or animal thinks.

        • @Gorilladrums@lemmy.world · 2 points · 7 days ago

          My point is that those people are a very small minority, and they suffer from issues that go beyond their ignorance of how these models work.

          • @finitebanjo@lemmy.world · 3 points · 7 days ago

            I think they’re more common than you realize. Ignorance of how these models work is the commonly held stance among the general public.

            • @Gorilladrums@lemmy.world · 1 point · 7 days ago

              You’re definitely correct that most people are ignorant of how these models work. I think most people understand these models aren’t sentient, but even among those who do, they don’t become emotionally attached to these models. I’m just saying that the people who end up developing feelings for chatbots go beyond ignorance. They have issues that require years of therapy.

        • @Genius@lemmy.zip · 1 point · 7 days ago

          The difference is that the brain is recursive while these models are linear, but the fundamental structure is similar.

      • @iii@mander.xyz · 2 points · 7 days ago

        > The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.

        I don’t know if it’s an urban myth, but I’ve heard about 20% of LLM inference time and electricity is being spent on “hello” and “thank you” prompts. :)

  • @Cornelius_Wangenheim@lemmy.world · 42 points · 8 days ago

    It’s already happening to me, but it’s over things like privacy, not recording every bit of your life for social media and kids blowing crazy amounts of money on F2P games.

    • @thallamabond@lemmy.world · 11 points · 8 days ago

      What’s all this about having to accept a NEW TOS for Borderlands 2? I purchased the game five years ago, but if I want to play today I have to accept a greater loss of privacy!

      When I was young you would find out about a video game from the movies! And they were complete! And you couldn’t take the servers offline, because they didn’t exist!

      But for real, fuck Randy Pitchford

          • @bitjunkie@lemmy.world · 4 points · 7 days ago

            Black people saying it with an A, as in rap music, is generally considered a camaraderie thing, whereas white people saying it with an R is considered a racist thing. White people aren’t supposed to say it at all, but it’s MUCH less acceptable with the latter pronunciation.

              • Jyek · 15 points · 6 days ago (edited)

                Black folks often use the N word casually to refer to each other as a form of taking back the word’s meaning. It used to be used exclusively in a racist fashion. The primary difference is that with the African American accent, the ending sound -ER is changed to more of an -UH sound. Sometimes, rarely and depending on the context, it is allowable for non-black people to say it with this accented pronunciation. But under no circumstances is it in good taste to use the original -ER ending to refer to a black person as a non-black person, that form is only used as a slur. When people refer to the “Hard R”, this is what they are talking about, the difference between the accented pronunciation as slang vs the original pronunciation intended as a slur.

    • notabot · 17 points · 8 days ago

      24 YEARS AGO!

      /me crumbles to dust.

      I refuse to believe that was almost a quarter of a century ago.

  • @throwawayacc0430@sh.itjust.works · 29 points · 7 days ago

    Step 1: Give Robots Voting Rights

    Step 2: ???

    Step 3: Plot twist, all those Robots are actually under direct control of the Evil Corporation Inc. and they already won every future election.

    Long Live the Cyberlife CEO!