• TheTechnician27
    21
    19 days ago

    It’s a two-pass solution, but it makes it a lot more reliable.

    So your technique to “make it a lot more reliable” is to ask an LLM a question, then run the LLM’s answer through an equally unreliable LLM to “verify” the answer?

    We’re so doomed.

    • @Apepollo11@lemmy.world
      3
      19 days ago

      Give it a try.

      The key is in the different prompts. I don’t think I should really have to explain this, but different prompts produce different results.

      Ask it to create something, it creates something.

      Ask it to check something, it checks something.

      Is it flawless? No. But it’s pretty reliable.

      It’s literally free to try it now, using ChatGPT.
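
      For anyone who would rather script the two passes than paste them into ChatGPT by hand, here is a minimal sketch of the idea. It assumes the official openai Python client and an API key in the environment; the model name and both prompts are illustrative placeholders, not anyone's actual workflow.

      ```python
      # Two-pass sketch: one prompt to create, a second prompt to check the result.
      # Assumes the `openai` package (pip install openai) and OPENAI_API_KEY set in
      # the environment; the model name and prompts are illustrative placeholders.
      from openai import OpenAI

      client = OpenAI()

      def ask(prompt: str) -> str:
          """Send a single prompt and return the model's text reply."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": prompt}],
          )
          return response.choices[0].message.content

      # Pass 1: ask it to create something.
      draft = ask("Write a Python function that parses ISO 8601 date strings.")

      # Pass 2: ask it to check something (the draft from pass 1).
      review = ask(
          "Review the following code for bugs and edge cases. "
          "List every problem you find:\n\n" + draft
      )

      print("DRAFT:\n", draft)
      print("REVIEW:\n", review)
      ```

      Keeping the creation and the check as two separate prompts, rather than one combined "write and check" request, is the point of the two-pass approach described above.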

      • TheTechnician27
        11
        19 days ago

        I don’t think I should really have to explain this, but different prompts produce different results.

        Ron Swanson saying "I know more than you" to a home improvement store employee

        • @Apepollo11@lemmy.world
          2
          19 days ago

          Hey, maybe you do.

          But I’m not arguing anything contentious here. Everything I’ve said is easily testable and verifiable.