• @Aria@lemmygrad.ml
    link
    fedilink
    1
    5 months ago

    It’s the 671B model that’s competitive with o1, so you need 16 80GB cards. The commenters seem very happy with the smaller versions, and I’m going to try one now, but it doesn’t seem like anything you can run on a home computer with 4 4090s is going to be in the same ballpark as ChatGPT.
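    For a rough sense of scale, here’s the back-of-the-envelope VRAM math (a sketch assuming 1 byte per parameter, i.e. FP8, R1’s native precision; KV cache and activations come on top, which is where the headroom goes):

        # Rough VRAM arithmetic for dense LLM weights. Assumes FP8 (1 byte
        # per parameter); real deployments need extra room for KV cache and
        # activations, hence the usual 16 x 80 GB figure.

        def weights_gb(params_billions: float, bytes_per_param: float = 1.0) -> float:
            """GB needed just to hold the model weights."""
            return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9

        print(f"671B @ FP8: ~{weights_gb(671):.0f} GB of weights")
        print(f"16 x 80 GB cards: {16 * 80} GB total")
        print(f"4 x RTX 4090: {4 * 24} GB, only enough for much smaller models")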

  • sunzu2
    link
    fedilink
    0
    5 months ago

    But the new DeepSeek model comes with a catch if run in the cloud-hosted version—being Chinese in origin, R1 will not generate responses about certain topics like Tiananmen Square or Taiwan’s autonomy, as it must “embody core socialist values,” according to Chinese Internet regulations. This filtering comes from an additional moderation layer that isn’t an issue if the model is run locally outside of China.

    • @Grapho@lemmy.ml
      link
      fedilink
      2
      edit-2
      5 months ago

      What the fuck is it with westerners and trying racist shit like this every time a Chinese made tool or platform comes up?

      I stg if it had been developed by Jews in the 1920s, the first thing they’d do would be to ask it about cooking with the blood of Christian babies

  • There’s a lot of explaining to do for Meta, OpenAI, Claude, and Google Gemini to justify overpaying for their models now that there’s a literal open-source model that can do the basics.

    • Zement
      link
      fedilink
      1
      5 months ago

      Yes, GPT4All if you want to try for yourself without coding know-how.

    • suokoOP
      link
      fedilink
      1
      5 months ago

      I’m testing vscode+continue+ollama+qwen2.5-coder right now. With a simple GPU it’s already OK.
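      If you want to poke at the same local model outside the editor, here’s a minimal sketch against Ollama’s REST API (assumes a default install listening on localhost:11434 and that you’ve already run `ollama pull qwen2.5-coder`):

          # Query a locally served model through Ollama's /api/generate endpoint.
          import json
          import urllib.request

          def ask(prompt: str, model: str = "qwen2.5-coder") -> str:
              req = urllib.request.Request(
                  "http://localhost:11434/api/generate",
                  data=json.dumps({"model": model, "prompt": prompt,
                                   "stream": False}).encode(),
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  return json.loads(resp.read())["response"]

          print(ask("Write a Python one-liner that reverses a string."))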

    • suokoOP
      link
      fedilink
      0
      5 months ago

      You still need expensive hardware to run it, unless the myceliumwebserver project takes off.

        • @Scipitie@lemmy.dbzer0.com
          link
          fedilink
          1
          5 months ago

          How much VRAM does your TI pack? Is that the standard 8 GB GDDR6?

          I will, because I’m surprised and impressed that a 14B model runs smoothly.

          Thanks for the insights!
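          For reference, the rough arithmetic on why a 14B model fits on modest hardware at all (a sketch assuming plain N-bit weight quantization; real GGUF formats add some per-block overhead):

              # Approximate weight memory for a 14B model at various precisions.
              def quantized_gb(params_billions: float, bits_per_param: int) -> float:
                  return params_billions * bits_per_param / 8  # 1e9 * (bits/8) bytes / 1e9

              for bits in (16, 8, 4):
                  print(f"14B @ {bits}-bit: ~{quantized_gb(14, bits):.1f} GB")
              # 4-bit lands around 7 GB: tight on an 8 GB card, and workable
              # in system RAM for CPU-only inference, just slower.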

          • @birdcat@lemmy.ml
            link
            fedilink
            2
            5 months ago

            I don’t even have a GPU and the 14B model runs at an acceptable speed. But yes, faster and bigger would be nice… or knowing how to distill the biggest one, since I only use it for something very specific.
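            For what it’s worth, classic Hinton-style distillation just trains the small model to match the big model’s softened output distribution. A minimal PyTorch sketch of that loss (illustrative only; DeepSeek’s distilled models were instead fine-tuned on R1-generated outputs):

                import torch
                import torch.nn.functional as F

                def distillation_loss(student_logits: torch.Tensor,
                                      teacher_logits: torch.Tensor,
                                      temperature: float = 2.0) -> torch.Tensor:
                    # Soften both distributions, then match them with KL divergence.
                    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
                    log_student = F.log_softmax(student_logits / temperature, dim=-1)
                    return F.kl_div(log_student, soft_teacher,
                                    reduction="batchmean") * temperature ** 2

                # Toy usage with random logits over a 32k-token vocabulary:
                s, t = torch.randn(4, 32000), torch.randn(4, 32000)
                print(distillation_loss(s, t))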