• AmbiguousProps · 13 days ago

        That’s fair, but in that case I think I’d rather self-host an Ollama server and connect to it with an Android client. Much better performance.
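
        For anyone curious, the client side is pretty simple: Ollama listens on port 11434 by default. Here’s a minimal sketch in Python, assuming the server is reachable from your phone or LAN (the “ollama-box” hostname and the model name are placeholders for your own setup):

        ```python
        # Minimal sketch: query a self-hosted Ollama server from another device.
        # "ollama-box" and the model name are placeholders; the server needs to be
        # exposed on the network (e.g. OLLAMA_HOST=0.0.0.0) for this to work.
        import json
        import urllib.request

        OLLAMA_URL = "http://ollama-box:11434/api/generate"  # 11434 is Ollama's default port

        payload = {
            "model": "llama3.2",               # any model you have pulled
            "prompt": "Why is the sky blue?",
            "stream": False,                   # one JSON response instead of a stream
        }

        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])
        ```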

        • Greg Clarke · 13 days ago

          Yes, that’s my setup, but this will be useful for cases where the internet connection isn’t reliable.

        • @OhVenus_Baby@lemmy.ml · 12 days ago

          How does Ollama compare to GPT models? I use the paid tier for work and I’m curious how this stacks up.

          • AmbiguousProps · 12 days ago

            It’s decent, with the DeepSeek model anyway. It’s not as fast and has a lower parameter count, though. You might just need to try it and see if it fits your needs or not.
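
            If it helps when comparing, you can list what you have pulled locally along with parameter sizes. A quick sketch against Ollama’s /api/tags endpoint; the response field names here are from memory, so treat them as assumptions:

            ```python
            # Quick sketch: list locally pulled Ollama models with their parameter sizes,
            # e.g. to compare a deepseek-r1 variant against whatever else you have.
            # Field names (models, details, parameter_size) are assumptions from memory.
            import json
            import urllib.request

            with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
                data = json.loads(resp.read())

            for model in data.get("models", []):
                details = model.get("details", {})
                print(f"{model['name']}: "
                      f"{details.get('parameter_size', '?')} params, "
                      f"{details.get('quantization_level', '?')} quant")
            ```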

        • Greg Clarke · 13 days ago

          Has this actually been done? If so, I assume it would only be able to use the CPU.

          • @Euphoma@lemmy.ml · 13 days ago

            Yeah, I have it running in Termux; Ollama is in the Termux package repos. The speed it generates at does feel like CPU speed, but I’m not sure.
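
            One way to put a number on the “feels like CPU speed” part: the non-streaming response from /api/generate includes token counts and timings, so you can compute tokens per second. The eval_count / eval_duration field names are from memory, so treat them as assumptions; this should work the same inside Termux since Ollama serves on localhost:11434:

            ```python
            # Rough sketch: measure generation speed of a local Ollama install (e.g. in Termux).
            # eval_count / eval_duration are the response fields I recall Ollama returning;
            # eval_duration is in nanoseconds. Treat both as assumptions and double-check.
            import json
            import urllib.request

            payload = {
                "model": "llama3.2",   # placeholder: use whatever model you pulled
                "prompt": "Write one sentence about Android.",
                "stream": False,
            }

            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )

            with urllib.request.urlopen(req) as resp:
                body = json.loads(resp.read())

            tokens = body.get("eval_count", 0)
            seconds = body.get("eval_duration", 0) / 1e9  # nanoseconds -> seconds
            if seconds > 0:
                print(f"{tokens} tokens in {seconds:.1f}s = {tokens / seconds:.1f} tok/s")
            ```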