As the title says: I put the wrong value into some cleanup code and wiped everything. I had not pushed any of the important work. I just want to cry, but at least I can offer it to you as a cautionary tale.

Do not hesitate to push even if your project is in a broken state.
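
To give a flavour of how this kind of thing happens (invented names, not my exact script), the classic version looks like:

    #!/usr/bin/env bash
    # Classic cleanup-script footgun: if TARGET_DIR is empty or wrong, the
    # unguarded line would expand to rm -rf /* and delete far more than intended.
    TARGET_DIR="${1:-}"                 # e.g. the build directory to wipe

    # rm -rf "$TARGET_DIR"/*           # the dangerous, unguarded version

    # ${VAR:?} makes bash abort with an error when the variable is empty or
    # unset, instead of silently deleting the wrong tree.
    rm -rf "${TARGET_DIR:?refusing to run with an empty TARGET_DIR}"/*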

  • @theherk@lemmy.world

    Some wisdom my dad shared with me decades ago: when you’ve lost everything and must rebuild, the rebuild is ALWAYS better. As a programmer for a very long time who has done what you did, I have found this to be true. So there is your silver lining.

  • ☂️-

I’ve run sudo shutdown now on the main production (remote) server a few times before, and I’ve been SSHing into servers for a long time.

there there 🫂 it’s ok. we all do this shit. you do have backups of course, right?

  • @wheezy@lemmy.ml

    I did a “rm -rf *” in the wrong directory today.

    I got the absolutely beautiful “argument list too long” in return.

    I had a backup. But holy shit I’m glad the directory had thousands of files in it and nothing happened. First time I got that bash error and was happy.

    I usually have rm aliased to trash or whatever that CLI-based recycle bin is called. But I had just installed a new OS and ran this on a NAS folder today by mistake.
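
    If anyone wants the same safety net, one option is the trash-cli package, which moves things to the desktop trash instead of unlinking them (command names may differ by distro):

      # Assumes trash-cli is installed. Files go to the freedesktop.org trash
      # instead of being unlinked, so a wrong-directory rm is recoverable.
      alias rm='trash-put'      # in ~/.bashrc or ~/.zshrc
      trash-list                # see what is in the trash
      trash-restore             # interactively put something back
      trash-empty               # actually reclaim the space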

    • @mic_check_one_two@lemmy.dbzer0.com

      My dad once rm -rf’ed his company’s payroll server by accident. He was a database admin at the time and was asked to make a quick update to something. Instead of running it as a transaction (which would have been reversible), he went “eh, it’s a simple update.” He hit Enter after typing out the change for the one entry and saw “26478 entries updated”. At that point, his stomach fell out of his asshole.
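
      For anyone who hasn’t been burned yet, the version he skipped looks roughly like this (invented table and column names; the sketch always rolls back, so it changes nothing):

        # Rough sketch, invented schema. Inside an explicit transaction the
        # surprising row count can still be undone; without one, Enter is final.
        # Interactively you would COMMIT only once the reported count looks right.
        psql payroll -c "BEGIN; UPDATE timesheets SET hours = 8 WHERE employee_id = 1042; ROLLBACK;"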

      The company was too cheap to commit to regular 3-2-1 backups, so the most recent backup he had was a manual quarterly backup from three months ago. Luckily, Payroll still had paper timesheets for the past month, so they were able to stick an intern on data entry and get people paid. So they just had a void for those two months in between the backup and the paper timesheets.

      It wasn’t a huge issue, except for the fact that one of their employees was on parole. The parole officer asked the company to prove that the employee was working when he said he was. The officer wanted records for, you guessed it, the past three months. At that point, the company had to publicly admit to the fuckup. My dad was asked to resign… But at least the company started funding regular 3-2-1 backups (right before his two weeks’ notice was up).

    • @18107@aussie.zone

      git-fire

      “git-fire is a Git plugin that helps in the event of an emergency by switching to the repository’s root directory, adding all current files, committing, and pushing commits and all stashes to a new branch (to prevent merge conflicts).”
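
      If the plugin isn’t installed when disaster strikes, the manual equivalent is roughly this (branch name is arbitrary):

        # Roughly what git-fire automates, done by hand. Branch name is arbitrary.
        cd "$(git rev-parse --show-toplevel)"   # start from the repository root
        git checkout -b "fire-$(date +%s)"      # fresh branch, so no conflicts with anyone
        git add -A                              # stage everything, tracked or untracked
        git commit -m "emergency commit"
        git push -u origin HEAD                 # get it off the dying machine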

      • Lovable Sidekick

        They wouldn’t push to main at the same time tho, they would push to the branches they’re working on. Unless their organization is very badly run, and then it’s probably already happened before just because it was Tuesday.

      • Richie Rich

        Who pushes to main? That branch should be protected! Who reviews the merge request?

  • Rolling Resistance

    Sorry this happened.

    Use it as an opportunity to learn how to better store and edit your code (e.g. a VCS and a smart-ish editor). For me, a simple Ctrl-Z would be enough to get my code back.

    • mel ♀ (OP)

      I should have put it in the post text, but I used a wrong value inside a test.

      • Except that one is automatically versioned and would have saved you this pain, while the other relies on you actively remembering to commit, then doing extra work to clean up your history before sharing; and once you push, it’s harder to rewrite history into a clean version to share.

        These days, there’s little excuse not to run a copy-on-write (COW) filesystem with automated snapshots in addition to your normal, manual VCS activities.
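
        On btrfs, for instance, a periodic read-only snapshot is about one line (paths are illustrative; /home has to be a btrfs subvolume):

          # Illustrative only: read-only snapshot of /home, run from a cron job or
          # systemd timer. Assumes /home is a btrfs subvolume and .snapshots exists.
          sudo btrfs subvolume snapshot -r /home "/home/.snapshots/home-$(date +%F-%H%M)"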

      • @HelloRoot@lemy.lol

        I’m paranoid. I have like 5 different ways (including 3-2-1 backups) to restore everything. COW fs is great for stuff that is not a git-able project.

  • @Valmond@lemmy.world

    Ya, push push push baby, do it on your own branch so that you can find your way back if needed.

    Especially when refactoring.

  • @tias@discuss.tchncs.de

    I keep my git clone in Dropbox so I can revert accidental deletes and always have the most recent code on all devices without having to remember to commit and push. If it requires manual execution, I wouldn’t really consider it a proper backup solution.

    • @dave@feddit.uk

      I have been burnt by Dropbox in the past, so now I use Syncthing between my desktop, laptop, and a private remote server, with file versioning turned on. It’s trivial to globally ignore node_modules (ignore file below), and no data goes to a third party.

      It’s saved me on several occasions.
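
      For anyone setting it up, the ignore file is just patterns, one per line (the folder path is whatever you sync):

        # Hypothetical example: Syncthing reads .stignore from the root of each
        # shared folder; a bare name matches at any depth, so every node_modules
        # directory in the tree gets skipped.
        echo 'node_modules' > ~/Sync/code/.stignore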

    • I use Dropbox too. Though I have to admit, when running code you sometimes have to pause syncing, otherwise it interferes with execution. But it’s definitely worth the peace of mind. Sometimes you don’t want to commit stuff until you’re sure it works.