Bonus issue:

This one is a little bit less obvious.

  • AmbiguousProps · 43 points · 1 month ago

    Why do LLMs obsess over making numbered lists? They seem to do that constantly.

    • @Tolookah@discuss.tchncs.de · 44 points · 1 month ago

      Oh, I can help! 🎉

      1. computers like lists, they organize things.
      2. itemized things are better when linked! 🔗
      3. I hate myself a little for writing this out 😐
    • @coherent_domain@infosec.pub · 17 points · edited · 1 month ago

      My conspiracy theory is that early LLMs had a hard time figuring out the logical relation between sentences, and hence did not generate good transitions between them.

      I think the bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don’t tend to see bullet points that much in normal human communication.

      • Possibly linux · 4 points · edited · 1 month ago

        That’s not a bad theory, especially since newer models don’t do it as often.

    • @gamer@lemm.ee · 1 point · 1 month ago

      Late, but I’m pretty sure it’s a byproduct of the RLHF process used to train these types of models. Basically, they have a bunch of humans look at multiple outputs from the LLM and rate the best ones, and it turns out people find lists easier to understand than other styles (alternatively, the poor souls slaving away in the AI mines rating responses all day find it faster to understand a list than a paragraph through the blurry lens of mental fatigue). A rough sketch of that rating step is below.
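
      For the curious, here’s a minimal sketch of the pairwise preference loss commonly used to train reward models in RLHF; the function name and all reward scores are made up for illustration:

      ```python
      # Illustrative only: a Bradley-Terry-style pairwise preference loss.
      # Raters compare two responses; training raises the reward of the
      # preferred one. If raters keep preferring list-formatted answers,
      # "make lists" ends up baked into the reward signal.
      import math

      def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
          # Negative log-probability that the chosen response outranks the rejected one.
          return -math.log(1.0 / (1.0 + math.exp(reward_rejected - reward_chosen)))

      # Made-up reward scores: a bulleted answer vs. a plain paragraph.
      print(preference_loss(2.0, 0.5))  # rater picked the list: low loss, small update
      print(preference_loss(0.5, 2.0))  # rater picked the paragraph: high loss, big update
      ```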

  • kubica · 18 points · 1 month ago

    Lol, my brain is like, nope, I’m not even trying to read that.

    • LostXOR · 2 points · 1 month ago

      I think I lost a few brain cells reading it all the way through.

    • qaz (OP) · 12 points · 1 month ago

      People often use a ridiculous number of emojis in their READMEs; perhaps seeing that it was a README triggered something in the LLM that made it talk like one?

  • FQQD! · 13 points · edited · 1 month ago

    Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have any words for this.

  • Possibly linux · 9 points · 1 month ago

    There have been so many people filing AI-generated security vulnerability reports.

  • @Korne127@lemmy.world · 8 points · 1 month ago

    I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves.

    • qaz (OP) · 12 points · 1 month ago

      They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.

      • @Korne127@lemmy.world · 2 points · 1 month ago

        Then… that’s so fucking weird. Why would someone make that issue? I genuinely don’t understand how this could have happened in that case.

        • qaz (OP) · 1 point · 1 month ago

          I’m pretty sure it’s an automated system that makes these issues. The accounts looked like bots. However, that only makes it even weirder.