• @LastYearsPumpkin@feddit.ch
    2 years ago

    Don’t use ChatGPT as a source; there is no reason to trust anything it says.

    It might be right, it might have just thrown together words that sound right, or maybe it’s completely made up.

    • metaStatic
      2 years ago

      It just guesses the next most probable word. Literally everything it says is made up.
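
      A rough sketch of what that means (a made-up toy example, not any real model’s code; `toy_model` and `next_token` are invented here purely for illustration): the model only has a probability distribution over next words, so a fluent-sounding but wrong continuation can come out just as easily as a correct one.

      ```python
      # Toy sketch of next-word prediction (purely illustrative, hand-written
      # probabilities; real LLMs learn these distributions from training data).
      import random

      # Hypothetical "model": for a two-word context, a distribution over next words.
      toy_model = {
          ("the", "capital"): {"of": 0.9, "city": 0.1},
          ("capital", "of"): {"France": 0.5, "Spain": 0.3, "cheese": 0.2},
      }

      def next_token(context):
          """Sample the next word from the model's probability distribution."""
          words, weights = zip(*toy_model[context].items())
          return random.choices(words, weights=weights, k=1)[0]

      print(next_token(("capital", "of")))  # usually "France", sometimes "cheese"
      ```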

      • @8ender@lemmy.world
        2 years ago

        Words are how we communicate knowledge, so sometimes the most probable combinations of words end up being facts.

      • Thales
        2 years ago

        “ChatGPT, please provide your rebuttal to this statement about you: […]”

        Hey! That’s a common misconception. While I do predict the next word based on patterns in the data I was trained on, I’m not just making things up. I provide information and answers based on the vast amount of text I’ve been trained on. It’s more about recognizing patterns and providing coherent, relevant responses than just “guessing.” Cheers!

        • @sky@codesink.io
          2 years ago

          Right, and they’re actually pretty bad at remembering facts; that’s why we have entire institutions dedicated to maintaining accurate reference material!

          Why people throw all of this out the window for advice from a dumb program, I’ll never understand.

        • thbb
          2 years ago

          Not really. We also have deductive capabilities (aka “System 2”) that let us establish some level of proof for our statements.

      • SkaveRat
        2 years ago

        While it’s technically true that it “just predicts the next word”, that’s a very misleading argument to make.

        Computers are also “just some basic logic gates” and yet we can do complex stuff with them.

        Complex behaviour can result from simple things.
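
        To make that concrete, here’s a toy sketch (my own illustration, nothing more): a half adder built out of nothing but a NAND gate, i.e. useful behaviour composed from one trivially simple part.

        ```python
        # Everything below is built from a single primitive: NAND.
        def nand(a: int, b: int) -> int:
            return 0 if (a and b) else 1

        def not_(a):    return nand(a, a)
        def and_(a, b): return not_(nand(a, b))
        def or_(a, b):  return nand(not_(a), not_(b))
        def xor_(a, b): return and_(or_(a, b), nand(a, b))

        def half_adder(a, b):
            """Adds two bits: returns (sum, carry)."""
            return xor_(a, b), and_(a, b)

        print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
        ```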

        I’m not defending the bullshit that LLMs generate, just pointing out that you have to be careful with your arguments.