I’ve just re-discovered ollama, and it’s come a long way: it reduces the once-difficult task of locally hosting your own LLM (and getting it running on a GPU) to simply installing a deb! It also works on Windows and Mac, so it can help everyone.

I’d like to see Lemmy become useful for specific technical niches, instead of everyone hunting for the best existing community (which is subjective and makes information hard to find), so I created !Ollama@lemmy.world for everyone to discuss ollama, ask questions, and help each other out!

So please join, subscribe, and feel free to ask questions, post tips and projects, and help out where you can!

Thanks!

  • brucethemoose@lemmy.world · 5 days ago

    TBH you should fold this into localllama? Or open source AI?

    I have very mixed (mostly bad) feelings on ollama. In a nutshell, they’re kinda Twitter attention grabbers that give zero credit/contribution to the underlying framework (llama.cpp). And that’s just the tip of the iceberg, they’ve made lots of controversial moves, and it seems like they’re headed for commercial enshittification.

    They’re… slimy.

    They like to pretend they’re the only way to run local LLMs and blot out any other discussion, which is why I feel kinda bad about a dedicated ollama community.

    It’s also a highly suboptimal way for most people to run LLMs, especially if you’re willing to tweak.

    I would always recommend Kobold.cpp, tabbyAPI, ik_llama.cpp, Aphrodite, LM Studio, the llama.cpp server, sglang, the AMD lemonade server, any number of backends over them. Literally anything but ollama.


    …TL;DR I don’t like the idea of focusing on ollama at the expense of other backends. Running LLMs locally should be the community, not ollama specifically.

      • brucethemoose@lemmy.world · 5 days ago

        Totally depends on your hardware, and what you tend to ask it. What are you running? What do you use it for? Do you prefer speed over accuracy?

        • southernbeaver@lemmy.world · 3 days ago

          My HomeAssistant is running on Unraid but I have an old NVIDIA Quadro P5000. I really want to run a vision model so that it can describe who is at my doorbell.

          • brucethemoose@lemmy.world · 5 days ago

            OK.

            Then LM Studio. With Qwen3 30B IQ4_XS, low temperature MinP sampling.

            That’s what I’m trying to say, though: there is no one-click solution; that’s kind of a lie. LLMs work a bajillion times better with just a little personal configuration. They are not magic boxes, they are specialized tools.
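
            To make that concrete, here is roughly what "low temperature MinP sampling" looks like from code against LM Studio's OpenAI-compatible local server (the port, model id, and min_p pass-through are assumptions on my part; adjust for whatever backend you actually run):

            ```python
            # Rough sketch, not gospel: query a local OpenAI-compatible server
            # (LM Studio listens on http://localhost:1234/v1 by default) with
            # low temperature plus MinP sampling.
            from openai import OpenAI

            client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

            response = client.chat.completions.create(
                model="qwen3-30b-iq4_xs",  # placeholder id; use whatever your server lists
                messages=[{"role": "user", "content": "Why do sampler settings matter?"}],
                temperature=0.2,             # low temperature
                extra_body={"min_p": 0.05},  # MinP; only honored if the backend supports it
            )
            print(response.choices[0].message.content)
            ```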

            Random example: on a Mac? Grab an MLX distillation, it’ll be way faster and better.

            Nvidia gaming PC? TabbyAPI with an exl3. Small GPU laptop? ik_llama.cpp. APU? Lemonade. Raspberry Pi? That’s important to know!

            What do you ask it to do? Set timers? Look at pictures? Cooking recipes? Search the web? Look at documents? Do you need it fast, or accurate?

            This is one reason why ollama is so suboptimal, with the other being just bad defaults (Q4_0 quants, 2048 context, no imatrix or anything outside GGUF, bad sampling last I checked, chat template errors, bugs with certain models, I can go on). A lot of people just try “ollama run” I guess, then assume local LLMs are bad when it doesn’t work right.
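
            And if you do stick with ollama anyway, at least override those defaults per request. A rough sketch (the model tag is a placeholder for whatever you've pulled):

            ```python
            # Hypothetical sketch: push past ollama's stock defaults via per-request options.
            import ollama  # pip install ollama

            response = ollama.chat(
                model="qwen3:30b",  # placeholder tag; substitute whatever you've pulled
                messages=[{"role": "user", "content": "Explain MinP sampling in one paragraph."}],
                options={
                    "num_ctx": 8192,     # the default context is a tiny 2048 tokens
                    "temperature": 0.2,  # lower temperature for steadier output
                    "min_p": 0.05,       # MinP sampling, assuming your ollama build supports it
                },
            )
            print(response["message"]["content"])
            ```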

  • Jonathan@lemmy.world · 5 days ago

    Cool! I’ll subscribe. I’ve got about a dozen projects I’d like to build with Ollama; whether I’ll get the motivation and free time, who knows?

    • catty@lemmy.world (OP) · 5 days ago

      Start now! Install it, get a Python environment up and running if you haven’t already, and get that first play-around project working that you can build outwards from!
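
      For that first play-around, you barely need any code, since ollama is just an HTTP server on port 11434 by default. A minimal sketch (swap in whatever model you’ve actually pulled):

      ```python
      # Tiny first project: ask a locally running ollama server one question.
      import requests  # pip install requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3.2",  # placeholder; use a model fetched with `ollama pull`
              "prompt": "Give me three weekend project ideas for a local LLM.",
              "stream": False,      # one JSON response instead of a token stream
          },
          timeout=120,
      )
      resp.raise_for_status()
      print(resp.json()["response"])
      ```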

      • Jonathan@lemmy.world · 5 days ago

        Already set up! I think the first thing I want to do is set up retrieval-augmented generation (RAG). Several of my hobby ideas will require it, I think. I started trying to read up on it a couple of days ago, but had a serious lack of focus going on.

        I’ve been kind of hoping to come across a super simple way to implement it, but haven’t exactly looked much yet.
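
        From the little reading I did manage, the bare-bones version seems to be: embed your documents, find the one closest to the question, and paste it into the prompt. Something like this, I think (the model tags and embedding call are my assumptions, not a recommendation):

        ```python
        # Bare-bones RAG sketch: no vector database, just embeddings and cosine similarity.
        import math
        import ollama  # pip install ollama

        EMBED_MODEL = "nomic-embed-text"  # placeholder embedding model; pull it first
        CHAT_MODEL = "llama3.2"           # placeholder chat model

        documents = [
            "The garage door code is rotated on the first of every month.",
            "Home Assistant restarts automatically at 3 AM on Sundays.",
            "The doorbell camera stores clips for seven days.",
        ]

        def embed(text: str) -> list[float]:
            # One vector per text snippet, straight from the local ollama server.
            return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]

        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm

        question = "How long does the doorbell keep recordings?"
        q_vec = embed(question)
        best = max(documents, key=lambda d: cosine(q_vec, embed(d)))

        answer = ollama.chat(
            model=CHAT_MODEL,
            messages=[{
                "role": "user",
                "content": f"Answer using only this context:\n{best}\n\nQuestion: {question}",
            }],
        )
        print(answer["message"]["content"])
        ```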