• Pennomi@lemmy.world · 5 months ago

    Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
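
    For example, here's a minimal sketch of that workflow in Python, assuming you're running from a KoboldCpp checkout. The repo and quant file named below are just examples; substitute whichever weights you actually want:

    ```python
    # Sketch: fetch a GGUF quant from HuggingFace, then point KoboldCpp at it.
    # Repo and filename are examples only; pick any quant that fits your hardware.
    import subprocess
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    model_path = hf_hub_download(
        repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # example repo
        filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # example 4-bit quant
    )

    # Launch the KoboldCpp server against the downloaded weights.
    subprocess.run(["python", "koboldcpp.py", "--model", model_path])
    ```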

    • TipRing@lemmy.world · 5 months ago

      Also, for an interface I’d recommend KoboldLite for writing or assistant use, and SillyTavern for chat/RP.
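
      Both of those are just front ends talking to the same KoboldCpp endpoint. If you're curious what they send under the hood, a rough sketch of a raw call (assuming the default port 5001; check your server's startup output):

      ```python
      # Rough sketch of a raw request against KoboldCpp's generate endpoint.
      import requests

      resp = requests.post(
          "http://localhost:5001/api/v1/generate",
          json={"prompt": "Once upon a time", "max_length": 80, "temperature": 0.7},
      )
      print(resp.json()["results"][0]["text"])
      ```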

    • DarkThoughts@kbin.social · 5 months ago

      I tried oobabooga and it basically always crashes when I try to generate anything, no matter what model I try. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, far more than consumer cards have, so you’d need at least a small GPU server farm to host them locally with any reliability. Unless, of course, you’re fine with practically nonexistent context sizes.
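
      Back-of-the-envelope numbers for why that is (weights only; the KV cache adds more on top as context grows):

      ```python
      # Rough weights-only VRAM estimate: parameters * bytes per parameter.
      # Ignores KV cache and activation overhead, which grow with context size.
      def vram_gb(params_billion: float, bits: int) -> float:
          return params_billion * 1e9 * bits / 8 / 1024**3

      for params in (7, 13, 70):
          print(f"{params}B: fp16 ~{vram_gb(params, 16):.0f} GB, "
                f"4-bit ~{vram_gb(params, 4):.0f} GB")
      # A 70B model is ~130 GB at fp16; even a 4-bit quant is ~33 GB,
      # which is beyond any single consumer card.
      ```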

      • exu@feditown.com · 5 months ago

        You’ll want to use a quantised model on your GPU. You could also run on the CPU and offload some layers to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models use the GGUF format.
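
        If you'd rather script it than go through oobabooga, the llama-cpp-python bindings expose the same offloading knob. A minimal sketch (the model path is a placeholder for whatever GGUF file you downloaded):

        ```python
        # Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
        # n_gpu_layers controls offloading: 0 = pure CPU, -1 = everything on GPU.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # placeholder
            n_gpu_layers=20,  # offload 20 layers to the GPU, keep the rest on CPU
            n_ctx=4096,       # context window
        )

        out = llm("Q: What is GGUF? A:", max_tokens=64)
        print(out["choices"][0]["text"])
        ```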