• 0 Posts
  • 105 Comments
Joined 3 months ago
Cake day: March 31st, 2025


  • You don’t read books for that though. Does this person think books are just sequences of facts you’re supposed to memorise?

    I think I have something shaped like a counterexample. Large literature reviews and compilations of data tables can work like this, and grepping them will get you a feel for what is possible, with a single practical example of each. But even then you’re supposed to read them in order to learn not only what is possible, but also what is not (or at least what wasn’t tested), and what fails, how, and why. Actually reading through also gives you the bigger picture and lets you draw your own conclusions ofc, like you note

    Don’t you ever read something and go “oh, I never even thought about this”, “I didn’t know this was a problem”, “I wouldn’t have thought of this myself”. If not then what the fuck are you reading??

    even then, feeding them to a chatbot is valleybrain nonsense, because grep will be more than enough and much faster, and you only really know what’s inside after reading it anyway

    even then, just having the right snippet is not enough, because presumably the result would only become apparent after testing irl, or perhaps after building a model or simulation or what have you. and even then, getting to the point where you need to do any of that requires a degree of curiosity, and an ability to put together information from different sources, that would exclude promptfondlers. it’s like these people try on purpose to think as little as possible


  • solzhenitsyn is pretty sus too, what with him being an orthodox fundamentalist, a fan of the tsar, a panslavic antisemite, a 2000s putin fan (he died days before the russian invasion of georgia), and a proponent of enlarging russia to include “sufficiently russified” parts of belarus, ukraine and kazakhstan, and therefore an opponent of ukrainian independence; also

    Solzhenitsyn made a speaking tour after Francisco Franco’s death, and “told liberals not to push too hard for changes because Spain had more freedoms now than the Soviet Union had ever known.”

    In 1983 he met Margaret Thatcher and told her “the German army could have liberated the Soviet Union from Communism but Hitler was stupid and did not use this weapon”

    Regarding Ukraine he wrote “All the talk of a separate Ukrainian people existing since something like the ninth century and possessing its own non-Russian language is recently invented falsehood” and “we all sprang from precious Kiev”.

    Solzhenitsyn was a supporter of the Vietnam War and referred to the Paris Peace Accords as ‘shortsighted’ and a ‘hasty capitulation’.

    Solzhenitsyn was critical of NATO’s eastward expansion towards Russia’s borders and described the NATO bombing of Yugoslavia as “cruel” […] Solzhenitsyn accused NATO of trying to bring Russia under its control; he stated that this was visible because of its “ideological support for the ‘colour revolutions’ and the paradoxical forcing of North Atlantic interests on Central Asia”

    (all from wikipedia entry on him)

    it’s little wonder that the american altright embraced his writings






  • maybe it’s because chatbots incorporate, accidentally or not, the elements that make gambling addiction work on humans: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/

    the gist:

    There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.

    Here’s Eyal’s “Hook Model”:

    First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question.

    Second is the action — e.g., you do ask the bot a question.

    Third is the reward — and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer — but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction.

    Fourth is the investment — the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in.

    Then the user loops back to the beginning. The user will be more likely to follow an external trigger — or they’ll come to your site themselves looking for the dopamine rush from that variable reward.

    Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.

    chatbot users are also attracted to their terminally sycophantic and agreeable responses; some users form parasocial relationships with motherfucking spicy autocomplete; and chatbots were marketed to management types as a kind of futuristic status symbol: if you don’t use it you’ll fall behind, and then you’ll all see. people get a mix of gambling addiction / fomo / parasocial relationship / being dupes of a multibillion dollar advertising scheme, and that’s why they get so unserious about their chatbot use

    and also, separately, the cores of openai and anthropic and probably some other companies are made up of cultists who want to build a machine god, but that’s an entirely different rabbit hole

    like with any other bubble, the money for it won’t last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good


  • iirc L-amino acids and D-sugars, i.e. the ones observed in nature, are very slightly more stable than their mirror images because of the weak interaction

    probably it’s just down to a specific piece of quartz or soot that got lucky, and chiral amplification gets you the rest of the way from there

    also it’s not physics, or more precisely it’s a very physics-y subbranch of chemistry, and it’s done by chemists, because physicists suck at doing chemistry for some reason (i’ve seen it firsthand)






  • taking a couple of steps back and looking at the bigger picture, something you might never have done in your entire life judging by the tone of your post: people want to automate things that they don’t want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda, and deepfakes are among the big actual use cases of genai that likely won’t go away (isn’t the future bright?)

    this ties into another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful in black propaganda required a co-conspirator who had both the ability to do it and the correct political slant, who would shut up about it, and who had good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes involves fewer people. the same goes for things like nonconsensual porn, for which there are fewer barriers now thanks to genai

    then again, people automate things they don’t want to do. there are people who do like coding. and then there are Idea Men butchering codebases trying to vibecode, who have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away the things they don’t like or understand and likely have to pay people money for, all while the chatbot will never say sacrilegious things like “no” or “your idea is physically impossible” or “there is no reason for any of this”. people who don’t like coding vibecode. people who don’t like painting generate images. people who don’t like understanding things cram text through chatbots to summarize it. maybe you don’t see a problem with this, but that’s entirely a you problem

    this leads to three further points. chatbots let you, for the low low price of selling your thoughts to saltman & co, offload all of your “thinking” to them. this makes cheating in some cases exceedingly easy, something schools have to adjust to, while destroying any ability to learn in students who use them this way. another thing is that in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won’t repeat the mistake; a chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there’s no junior to promote in their place, because the junior was replaced by a chatbot.

    all of this comes before noticing little things like the multibillion dollar stock bubble tied to openai, or its power demands the size of a mid-sized european country’s, or whatever monstrosities palantir is cooking up, or a couple of other things i’m surely forgetting right now

    and also

    Is the backlash due to media narratives about AI replacing software engineers?

    it’s you getting swept up in an outsized ad campaign for the most bloated startup in history, not a “backlash in media”. what you see as “backlash” is everyone else who isn’t parroting the openai marketing brochure

    While I don’t defend them,

    are you suure

    e: and also, lots of these chatbots are used as accountability sinks. sorry nothing good will ever happen to you because Computer Says No (pay no attention to the oligarch behind the curtain)

    e2: this is also partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, and then the metaverse crashed and burned, all of these people (the same people who ran crypto before, including altman himself) and all that money went to pump the next bubble, because they can’t imagine anything else that will bring them that promised infinite growth. and their having money in the first place is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because the vcs somehow unlearned how to make money