• saltesc@lemmy.world (banned) · 23 points · 2 months ago (edited)

      I keep saying it for what it is: “genAI” is just Markov chains… AGAIN. And the first chain Markov ever applied was a language model: his analysis of letter sequences in Pushkin’s Eugene Onegin, published in 1913.

      Time and again in consumer IT history, people are fooled into thinking tech is doing magic intelligence stuff, but it’s just a classic Markov chain. Something that was once done on paper, now ripped through 2025 processors.

      In no way does a single algorithm type fit the definition of artificial intelligence. It’s just simple mathematics that can now be done incredibly fast.

      All it does is mathematically calculate the likelihood of what comes next based on how things occur in the data it’s been given. Its generation is just prediction over weighted values, and the quality is entirely dependent on the historical data it’s referencing.

      What normally comes after A? According to the data, B does 76% of the time. Choose B. What comes after B? C, 78% of the time, but S follows AB 98% of the time. Choose S. Do this thousands of times a second aaaaand, bingo. Perceived “intelligence”.

      That’s literally it.
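That next-most-likely lookup is simple enough to sketch in a few lines of Python. This is a toy character-level chain under the comment's own framing, not how any production model is actually built; the corpus and function names are made up for illustration:

```python
from collections import Counter, defaultdict

def train(text, order=1):
    """Count how often each character follows each context of length `order`."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def most_likely_next(model, context):
    """Pick the single most frequent successor, exactly the 'choose B' step above."""
    return model[context].most_common(1)[0][0]

# Toy corpus: 'b' always follows 'a'; after 'b', 's' is more common than 'c'.
corpus = "abs abs abc abs abc abs"
model = train(corpus, order=1)
print(most_likely_next(model, "a"))  # 'b': the most frequent successor of 'a'
```

Raising `order` gives the "S follows AB" behaviour from the example: the chain conditions on a longer context, which is exactly the kind of deeper lookup the comment describes.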

      Why is genAI so bad at its job? Because you can never hit 100% on everything, and a single mistake in one link can steer the chain down a wrong path. It’s why we call it probability, not fact. But there is no intelligence there to problem-solve itself, just deeper and deeper validation checks on the linear chain to prevent low-quality routes. Checks done using Markov’s same fundamentals.

        • saltesc@lemmy.world (banned) · 6 points · 2 months ago

          Yep. I find people who understand what’s actually going on in the back end get much more successful results. They know to add conditions to their prompts that prevent common or expected failures. The chain obviously can’t do this itself, as it is not an AI.

      • fckreddit@lemmy.ml · 2 points · 2 months ago

        This is precisely what I have been saying for so long. Just because LLMs sound smart doesn’t mean they are. They don’t form world views, or even understand ideas or concepts. They are just glorified statistical parrots that predict the next word from a probability distribution.

  • jaybone@lemmy.zip · 9 points · 2 months ago

    Somewhat OT, but I don’t quite get web3. The idea of decentralization sounds good, if we could get content back out of these select few walled gardens like FB and IG. But then they throw in all this blockchain and crypto bullshit.

    • squaresinger@lemmy.world · 6 points · 2 months ago

      The point of “web3” is to take a bunch of unrelated tech and brand it as web3, even though it has nothing to do with the web, just to attach it to popular, well-known branding that isn’t controlled by any single organization.

      It’s free and misleading marketing by slimy and untrustworthy people.