Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to achieve an average score above 80.

  • simple@lemm.ee · 1 year ago

    I’m afraid to even ask for the minimum specs on this thing; open-source models have gotten so big lately.

    • TheChurn@kbin.social · 1 year ago

      Every billion parameters needs about 2 GB of VRAM if using the bfloat16 representation: 16 bits per parameter ÷ 8 bits per byte = 2 bytes per parameter.

      1 billion parameters ≈ 2 billion bytes ≈ 2 GB.

      From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
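
      The same back-of-the-envelope math in a few lines of Python, as a sanity check (weights only; the KV cache and activations need extra room on top):

      ```python
      def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
          """Rough memory needed just to hold the weights, in decimal GB."""
          bytes_per_param = bits_per_param / 8
          return params_billions * 1e9 * bytes_per_param / 1e9

      print(weight_memory_gb(72, 16))  # bfloat16 -> 144.0 GB
      ```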

      • FaceDeer@kbin.social · 1 year ago

        It’s been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. Just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it’s fine. We’ll see if that works out in practice, I guess.
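
        Plugging those bit widths into the same back-of-the-envelope formula as above (weights only, decimal GB; the 2.5-bit figure is from the paper mentioned here):

        ```python
        for bits in (5, 4, 2.5):
            print(f"{bits} bits -> {72e9 * bits / 8 / 1e9} GB")  # 45.0, 36.0, 22.5
        ```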

        • Corngood@lemmy.ml · 1 year ago (edited)

          I’m more experienced with graphics than ML, but wouldn’t that cause a significant increase in computation time, since those aren’t native types for arithmetic? Maybe that’s not a big problem?

          If you have a link for the paper I’d like to check it out.

    • girsaysdoom@sh.itjust.works · 1 year ago

      I think I read somewhere that you’ll basically need 130 GB of RAM to load this model. You could probably get some used server hardware for less than $600 to run this.

      • ArchAengelus@lemmy.dbzer0.com · 1 year ago

        Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need 130 GB of VRAM on your GPUs.

      • cm0002@lemmy.world · 1 year ago

        Oh if only it were so simple lmao. You need ~130 GB of VRAM, aka the graphics card RAM, so you would need about 9 consumer-grade 16 GB graphics cards, and you’ll probably need Nvidia because of fucking CUDA, so we’re talking thousands of dollars. Probably approaching 10k.

        Ofc you can get cards with more VRAM per card, but not in the consumer segment, so even more $$$$$$
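
        The card count is just a ceiling division; a quick sketch (the per-card price is a rough assumption for illustration, not a quote):

        ```python
        import math

        vram_needed_gb = 130   # rough footprint from the estimate above
        vram_per_card_gb = 16  # typical consumer card
        cards = math.ceil(vram_needed_gb / vram_per_card_gb)
        print(cards)           # -> 9
        print(cards * 1000)    # -> 9000, i.e. ~$9k at an assumed ~$1k per card
        ```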

        • kakes@sh.itjust.works · 1 year ago

          Afaik you can substitute VRAM with RAM at the cost of speed. Not exactly sure how that speed loss correlates to the sheer size of these models, though. I have to imagine it would run insanely slow on a CPU.
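
          For illustration, this is roughly what partial offloading looks like with the llama-cpp-python bindings (the model filename and layer count here are hypothetical; layers that don’t fit in VRAM run on the CPU out of system RAM):

          ```python
          from llama_cpp import Llama  # pip install llama-cpp-python

          llm = Llama(
              model_path="smaug-72b.Q4_K_M.gguf",  # hypothetical quantized GGUF file
              n_gpu_layers=20,  # offload as many layers as fit in VRAM; the rest stay on CPU
              n_ctx=2048,
          )
          out = llm("Q: How many planets are in the solar system? A:", max_tokens=16)
          print(out["choices"][0]["text"])
          ```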

          • Infiltrated_ad8271@kbin.social · 1 year ago (edited)

            I tested it with a 16 GB model and barely got 1 token per second. I don’t want to imagine what it would take if I used 16 GB of swap instead, let alone 130 GB.

  • Miss Brainfarts@lemmy.blahaj.zone · 1 year ago

    That’s nice and all, but what are some FOSS models I can run on a GPU with only 4 GB?

    I’ve tried Deepseek Coder, and it’s pretty nice for what I use it for. Then there’s TinyLlama, which… well it’s fast, but I need to be veeeery exact in how I prompt it.

    • Fisch@lemmy.ml · 1 year ago (edited)

      Unfortunately, LLMs need a lot of VRAM. You could try using koboldcpp; it runs on the CPU but lets you offload layers onto the GPU. That way you might be able to stay within those 4 GB even with larger models.

      Edit: I forgot to mention there’s a fork of koboldcpp with ROCm for AMD cards, which is about twice as fast if I remember correctly. Only relevant if you have an AMD card tho.

      Edit 2: This is the model I use btw