Meta, OpenAI, Anthropic (Claude), and Google (Gemini) have a lot of explaining to do to justify their prices now that there’s a literal open-source model that can do the basics.
I don’t even have a GPU and the 14B model runs at an acceptable speed. But yes, faster and bigger would be nice… or knowing how to distill the biggest one, since I only use it for one very specific task.
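For anyone curious what CPU-only use looks like, here’s a minimal sketch using the official ollama Python client (the model tag `qwen2.5:14b` is just an assumption; substitute whatever 14B you pulled):

```python
# Minimal CPU-only chat with a local 14B model via Ollama.
# Assumes Ollama is installed and running, and that you've already
# done `ollama pull qwen2.5:14b` (the model tag is an assumption).
import ollama

response = ollama.chat(
    model="qwen2.5:14b",
    messages=[{"role": "user", "content": "Summarize in one line: local models need no API key."}],
)
print(response["message"]["content"])
```

Ollama falls back to CPU automatically when no GPU is detected, so the same call works either way.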
Yes, GPT4All if you want to try it for yourself without coding know-how.
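The desktop app needs zero code, but if you later want to script it, GPT4All also ships a Python binding; a minimal sketch (the model filename is an assumption, any GGUF from its catalog works):

```python
# Minimal sketch of the gpt4all Python binding; the desktop app itself needs no code.
# The model filename is an assumption -- pick any GGUF from the GPT4All catalog.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloaded on first use
with model.chat_session():
    print(model.generate("Explain model quantization in one sentence.", max_tokens=100))
```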
I’m testing vscode + Continue + Ollama + qwen2.5-coder right now. With a modest GPU it’s already OK.
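If anyone wants to reproduce it, a quick way to sanity-check that Ollama is actually serving the model before pointing Continue at it (assumes the default port 11434 and a prior `ollama pull qwen2.5-coder`):

```python
# Quick sanity check against Ollama's local REST API before wiring it into Continue.
# Assumes the default port (11434) and that the model has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",
        "prompt": "Write a Python one-liner that reverses a string.",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```

If that prints a completion, Continue just needs an Ollama provider entry pointing at the same model.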
You still need expensive hardware to run it. Unless the myceliumwebserver project gets off the ground.
How much VRAM does your Ti pack? Is that the standard 8 GB of GDDR6?
I will, because I’m surprised and impressed that a 14B model runs smoothly.
Thanks for the insights!