“TheFutureIsDesigned” bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I’ve selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I’m looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

  • @swlabr@awful.systems

    take 2 minutes to think of precisely the information I need

    I can’t even put into words the full nonsense of this statement. How do you think this would work? This is not how learning works. This is not how research works. This is not how anything works.

    This part threw me as well. If you can think of it, why read for it? It didn’t make sense, so I stopped looking into this particular abyss until you pointed it out again.

    I think the only interpretation of what this person said that approaches some level of rationality on their part is essentially a form of confirmation bias. They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want. LLMs are biased to be people-pleasers and will happily spin up hallucinated tokens to confirm whatever the user throws at them. That’s my best guess.

    That you didn’t think of the above just goes to show the failure of your unfeeble mind’s logic and reason to divine such a truth. Just kidding, sorta, in the sense that you can’t expect to understand an irrational thought process using rationality.

    But if it’s not that I’m still thrown.

    • @HedyL@awful.systems

      They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want.

      I think it’s either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).

      • @swlabr@awful.systems

        Oh, that’s a good angle too. Prompt the LLM with “what insights does this book have about B2B sales” or something.