Bonus issue:

This one is a little bit less obvious

  • @coherent_domain@infosec.pub
    1 month ago

    My conspiracy theory is that early LLMs had a hard time figuring out the logical relations between sentences, and hence did not generate good transitions between them.

    I think the bullet points might have been manually tuned up by the developers rather than being inherently present in the model, because we don’t tend to see bullet points that much in normal human communication.

    • Possibly linux
      1 month ago

      That’s not a bad theory, especially since newer models don’t do it as often.