It’s not always easy to distinguish between existentialism and a bad mood.

  • 10 Posts
  • 143 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • (No spoiler tags because it’s just background lore for Dune that’s very tangential to the main plot)

    Dune - after catastrophic wars between humans and AIs, computers are forbidden.

    That’s a retcon from the incredibly shit Dune-quel books written like 15 years after the original author had died. The first Dune was written well before computers as we know them came into vogue, and the Butlerian Jihad was meant to be a sweeping cultural revolution against the stranglehold that automated decision-making had achieved over society, fought not against off-brand terminators but against the entrenched elites that monopolized access to the setting’s equivalent of AI.

    Semi-canonically (via the Dune Encyclopedia), the inciting incident was, I think, some sort of robo-nurse casually euthanizing Serena Butler’s newborn baby because of some algorithmic verdict that keeping it alive didn’t square with optimal utilitarian calculus.

    tl;dr: The Butlerian Jihad originally seemed to be way more about against-the-walling the altmans and the nadellas and undoing the societal damage done by the proliferation of sfba rationalism than it was about fighting epic battles against AI-controlled mechs.





  • It should be noted that the only person in the article to lose his life died because the police, who had been explicitly told to be ready to use non-lethal means to subdue him since he was in the middle of a mental episode, gunned him down the moment they saw him coming at them with a kitchen knife.

    But here’s the thrice-cursed part:

    “You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”






  • He claims he was explaining what others believe, not what he believes

    “Others” as in, specifically, his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.

    I’m pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying “I disagree that this is a likely timescale but I’m going to try to explain Daniel’s position” immediately before. The reason I feel able to explain Daniel’s position is that I argued with him about it for ~2 hours until I finally had to admit it wasn’t completely insane and I couldn’t find further holes in it.

    Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn’t into, it’s not really relevant context.

    Also, the title is inflammatory only in the context of already knowing him as a ridiculous AI doomer; otherwise it’s fine. Inflammatory would be calling the video “economically illiterate bald person thinks valuations force-buy car factories, China having biomedicine research is like Elon running SpaceX”.







  • Here’s the full text:

    Fake radical honesty: when a dishonest person self-discloses taboo or undesirable things about themselves, but then omits the worst thing or things. They make themselves look honest and they’re not. This nasty trick ruined my life once. It occurs to me that this ploy may have been used to cover up the miricult scandal (https://archive.is/miricult.com) after a discussion with someone about what happened. A friend said something like that they’d looked into this and the people involved confessed, but only one minor was molested. For some reason this resulted in increased trust. It should not have. Have you seen fake radical honesty anywhere?

    For someone not steeped in the lore, why is this important?




  • The first prompt programming libraries start to develop, along with the first bureaucracies.

    I went three layers deep into his references and his references’ references to find out what the hell prompt programming is supposed to be, and ended up in a gwern footnote:

    It’s the ideologized version of You’re Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they’re doing well?

    gwern wrote:

    I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
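
    For what it’s worth, the “prompts are programs” framing does cash out to something concrete. Here’s a minimal sketch of what it might look like in practice (Python, with the model call stubbed out as a hypothetical complete() function; none of this is a real API):

        # Treating a prompt like code: versioned, assembled by a function, and
        # covered by a test. The model call is a stub -- swap in a real client.

        PROMPT_VERSION = "0.3"  # bump when the wording changes, like any other artifact

        def build_prompt(question: str) -> str:
            """Assemble the prompt; small wording changes here can change outputs."""
            return (
                "You are a careful assistant. Answer in one short sentence.\n"
                f"Q: {question}\n"
                "A:"
            )

        def complete(prompt: str) -> str:
            """Hypothetical model call -- replace with whatever client you actually use."""
            raise NotImplementedError

        def test_prompt_is_stable():
            """A 'unit test' for a prompt: assert properties of the output rather
            than exact text, since sampling makes outputs non-deterministic."""
            answer = complete(build_prompt("What year was Dune first published?"))
            assert "1965" in answer

    Whether writing an assert against a slot machine counts as “testing” is left as an exercise for the reader.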