• @solsangraal@lemmy.zip
    132
    7 days ago

    It only takes getting a made-up bullshit answer from ChatGPT a couple of times to learn your lesson and just skip asking it anything altogether.

    • @HollowNaught@lemmy.world
      2
      7 days ago

      I feel like a lot of people in this community underestimate the average person’s willingness to trust an AI. Over the past few months, every time I’ve seen a coworker ask something and search it up, I have never seen them click on a website to view the answer. They’ll always take what the AI summary tells them at face value

      Which is very scary

    • @SuperSaiyanSwag@lemmy.zip
      1
      6 days ago

      My girlfriend gave me a mini heart attack when she told me that my favorite band had broken up. Turns out it was ChatGPT making shit up; it came up with a random name for the final album, too.

    • I stopped using it when I asked it who I was: it said I was a prolific author, then proceeded to name various books I absolutely did not write.

    • @papalonian@lemmy.world
      13
      7 days ago

      I was using it to blow through an online math course I’d ultimately decided I didn’t need but didn’t want to drop. One step of a problem I had it solve involved finding the square root of something; it spat out a number that was kind of close, but functionally unusable. I told it three times that it had made a mistake, and it gave a different number each time. When I finally gave it the right answer and asked, “are you running a calculation or just making up a number?”, it said that if I logged in, it would use real-time calculations. I logged in on a different device and asked the same question; it again made up a number, but when I pointed it out, it corrected itself on the first try. Very janky.

      • @stratoscaster@lemmy.world
        7
        7 days ago

        ChatGPT doesn’t actually do calculations; it predicts likely text. It can generate code that will actually compute the answer, or provide the formula, but it cannot do the math itself.
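
        A safer pattern is to have the model emit an arithmetic expression and evaluate it yourself, rather than trusting numbers it writes in prose. A minimal sketch of that idea (`safe_eval` is a made-up helper, and the expression string stands in for whatever the model returns):

```python
import ast
import operator

# Evaluate a simple arithmetic expression safely, without eval().
# The idea: let the model produce the *expression*, let real code
# produce the *number*.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

# e.g. the square-root step from the comment above:
print(safe_eval("5000 ** 0.5"))  # 70.71067811865476
```

        The model only has to get the formula right, which is the part it’s actually decent at.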

      • OpenStars
        3
        7 days ago

        So it forced you to ask it many times? Now imagine that you paid for it each time. For the creator then, mission fucking accomplished.

    • @sugar_in_your_tea@sh.itjust.works
      12
      edit-2
      7 days ago

      Or you could learn to use the tool better and ask better questions. It’s pretty decent at some things, absolutely terrible for others.

      Asking to explain something like shorting a stock is one of the better uses, since there are tons of relevant posts explaining exactly that.

      • Match!!
        3
        7 days ago

        It’s a good tool so long as there are already better ways to get your answer

        • It’s good if the answers exist but you don’t know how to find them. LLMs are like search engines that can generate related terms or regurgitate common answers.

          I find LLMs help me use existing search engines a lot better, because I can usually get them to spit out domain-specific terms for a more general query.

      • @Godric@lemmy.world
        3
        7 days ago

        I’m dying laughing at the “NOOOOO AI BAAAAAAD” folks downvoting you for being absolutely correct on how to use the tool properly XD

        • Eh, it comes w/ the territory. Lemmy is generally anti-LLM, and this is a post that would specifically trigger people who hate LLMs.

          I just hope a few people stop and think about whether their distaste for LLMs is reasonable or just bandwagoning.

          • @Godric@lemmy.world
            2
            7 days ago

            Yeah, my big gripe with Lemmy is the hivemind that decides the “ideologically correct” way to post. One can hope to reach an open mind at some point, but such is social media :/

            • @sugar_in_your_tea@sh.itjust.works
              6
              edit-2
              7 days ago

              That’s true of all social media. It turns out, collecting information into groups tends to attract people w/ strong opinions about that type of information. If you have two groups, one very positive about something and one very negative, they’ll form separate groups because people prefer validation to conflict. It’s the natural consequence of social media, people like to form groups w/ like-minded people.

              I didn’t come to Lemmy because I disliked the Reddit hive-mind issue; I came because I disliked how they treated third party developers and volunteer moderators. I self-corrected for Reddit’s hive-mind by joining a bunch of subreddits that attracted different perspectives (e.g. some for leftists, some for conservatives, some for anarchists) so I’d hopefully get a decent mix, and I do the same here on Lemmy (though it seems Lemmy is a bit more leftist than Reddit, so there’s a bit less diversity in politics at least). I do the same for news sources and in my use of LLMs (ask it to find issues w/ a previous answer it gave).

              So I sometimes post alternative viewpoints in threads like these to hopefully give someone a chance to reconsider their opinions. Sometimes those comments get traction, sometimes they don’t, but hopefully someone down the line will see them and appreciate it.

    • @stratoscaster@lemmy.world
      2
      7 days ago

      I’ve only really found it useful when you provide the source information/data in your prompt, e.g. when you want to convert one data format to another, like table data into JSON.

      It works very consistently in those types of use cases. Otherwise it’s a dice roll.
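
      For the table-to-JSON case, the deterministic version of the transform is small enough that you can sanity-check the model’s output against it. A rough sketch, assuming a simple pipe-delimited table (the sample rows are made up):

```python
import json

# Hypothetical table data of the kind you'd paste into a prompt.
table = """\
name | age
Alice | 30
Bob | 25"""

rows = [line.split("|") for line in table.splitlines()]
headers = [h.strip() for h in rows[0]]
records = [dict(zip(headers, (cell.strip() for cell in row))) for row in rows[1:]]
print(json.dumps(records))
# [{"name": "Alice", "age": "30"}, {"name": "Bob", "age": "25"}]
```

      If the model’s JSON disagrees with a mechanical conversion like this, the model is the one that’s wrong.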

    • Ricky Rigatoni
      1
      7 days ago

      That’s what people get when they ask me questions too, but they still bother me all the time, so clearly that’s not going to work.