Bonus issue:

This one is a little bit less obvious

  • AmbiguousProps · 3 months ago

    Why do LLMs obsess over making numbered lists? They seem to do that constantly.

    • @Tolookah@discuss.tchncs.de · 3 months ago

      Oh, I can help! 🎉

      1. computers like lists, they organize things.
      2. itemized things are better when linked! 🔗
      3. I hate myself a little for writing this out 😐
    • @coherent_domain@infosec.pub · 2 months ago (edited)

      My conspiracy theory is that early LLMs had a hard time figuring out the logical relations between sentences, and hence didn’t generate good transitions between them.

      I think the bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don’t tend to see bullet points that much in normal human communication.

      • Possibly linux · 2 months ago (edited)

        That’s not a bad theory, especially since newer models don’t do it as often.

    • @gamer@lemm.ee · 2 months ago

      Late, but I’m pretty sure it’s a byproduct of the RLHF process used to train these types of models. Basically, they have a bunch of humans look at multiple outputs from the LLM and rate the best ones, and it turns out people find lists easier to understand than other styles (alternatively, the poor souls slaving away in the AI mines rating responses all day find it faster to understand a list than a paragraph through the blurry lens of mental fatigue).
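      (Illustrative aside, not from the thread: a minimal sketch of the preference-rating step described above, written as a toy Bradley-Terry reward model in PyTorch. Every name, shape, and number here is made up for illustration; real RLHF pipelines fine-tune a full transformer on rated transcripts, not a linear head on random embeddings.)

        import torch
        import torch.nn.functional as F

        # Toy reward model: a linear head scoring a response embedding.
        reward_head = torch.nn.Linear(768, 1)

        def preference_loss(chosen_emb, rejected_emb):
            # Bradley-Terry objective: push the rater-preferred ("chosen")
            # response to score higher than the rejected one.
            r_chosen = reward_head(chosen_emb)
            r_rejected = reward_head(rejected_emb)
            # -log sigmoid(r_chosen - r_rejected) is minimized when the
            # chosen response out-scores the rejected one.
            return -F.logsigmoid(r_chosen - r_rejected).mean()

        # If raters consistently prefer list-formatted answers, the reward
        # model learns to score lists higher, and an LLM tuned against it
        # drifts toward bullet points.
        chosen = torch.randn(4, 768)    # stand-in embeddings: list-style answers
        rejected = torch.randn(4, 768)  # stand-in embeddings: paragraph answers
        preference_loss(chosen, rejected).backward()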

  • kubica · 3 months ago

    Lol, my brain is like, nope, I’m not even trying to read that.

    • LostXOR · 3 months ago

      I think I lost a few brain cells reading it all the way through.

  • FQQD · 3 months ago (edited)

    Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have any words for this.

    • qazOP · 2 months ago

      People often use a ridiculous number of emojis in their READMEs; perhaps seeing that it was a README triggered something in the LLM that made it talk like one?

  • @Korne127@lemmy.world · 3 months ago

    I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves.

    • qazOP · 3 months ago

      They don’t, because it’s not an actual issue for any human reading it. The README contains the data, and the repo is just for coordination, but the LLM doesn’t understand that.

      • @Korne127@lemmy.world · 2 months ago

        Then… that’s so fucking weird. Why would someone make that issue? I genuinely don’t understand how this could have happened in that case.

        • qazOP · 2 months ago

          I’m pretty sure it’s an automated system that makes these issues. The accounts looked like bots. However, that only makes it even weirder.

  • Possibly linux · 2 months ago

    There have been so many people filing AI-generated security vulnerability reports.