I’m curious what the consensus here is on which models people use for general-purpose stuff (coding assist, general experimentation, etc.)

What do you consider the “best” model under ~30B parameters?

  • MagicShel@lemmy.zip

    I use Qwen Coder 30B and am testing Venice 24B; I’m also going to play with Qwen embedding 8B and the Qwen reranker(?) 8B. All at Q4.
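    For reference, a minimal sketch of what running one of these Q4 GGUFs looks like with llama-cpp-python (the file name, context size, and offload settings are placeholders, not exactly what I run):

    ```python
    # Minimal sketch: chat with a local Q4-quantized GGUF via llama-cpp-python.
    # The model path and settings below are placeholders; point them at whatever you run.
    from llama_cpp import Llama

    llm = Llama(
        model_path="path/to/qwen-coder-30b-q4_k_m.gguf",  # hypothetical file name
        n_ctx=8192,        # context window
        n_gpu_layers=-1,   # offload all layers to GPU/Metal if available
    )

    resp = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What is a reranker model used for?"}],
        max_tokens=256,
    )
    print(resp["choices"][0]["message"]["content"])
    ```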

    They all run pretty well on the new MacBook I got for work. My Linux home desktop has far more modest hardware, so there I generally run 7B models, though gpt-oss-20B-Q4 runs decently; it’s okay for a local model.

    None of them really blows me away, though Cline running in VS Code with Qwen 30B is okay for certain tasks. Asking it to strip all of the irrelevant HTML out of a table and reformat it as Markdown or AsciiDoc had it thinking for about 10 minutes before it asked which of the two I actually wanted. That was my fault; I should have picked one. I wanted Markdown but thought AsciiDoc would reproduce the table with better fidelity (it had embedded code blocks), so I left it open to interpretation.
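    For what it’s worth, that particular conversion doesn’t strictly need an LLM. A rough Python sketch with BeautifulSoup, assuming a simple table with no colspan/rowspan (which was not a given here), looks something like this:

    ```python
    # Rough sketch: flatten a simple HTML table into a Markdown table.
    # Assumes no colspan/rowspan; embedded <code>/<pre> blocks are just
    # wrapped in backticks, which mangles multi-line snippets.
    from bs4 import BeautifulSoup

    def table_to_markdown(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        table = soup.find("table")
        if table is None:
            return ""
        rows = []
        for tr in table.find_all("tr"):
            cells = []
            for cell in tr.find_all(["th", "td"]):
                # Inline any embedded code before flattening the cell to text.
                for code in cell.find_all(["code", "pre"]):
                    code.replace_with("`" + code.get_text(strip=True) + "`")
                # Markdown cells can't contain raw pipes or newlines.
                cells.append(cell.get_text(" ", strip=True).replace("|", "\\|"))
            rows.append(cells)
        if not rows:
            return ""
        header, *body = rows
        lines = ["| " + " | ".join(header) + " |",
                 "| " + " | ".join("---" for _ in header) + " |"]
        lines += ["| " + " | ".join(r) + " |" for r in body]
        return "\n".join(lines)
    ```

    The embedded code blocks are exactly why I figured AsciiDoc might preserve the table better; a sketch like this flattens that formatting away.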

    By comparison, ChatGPT ingested it and popped an answer back out in seconds, and that answer was wrong. So idk, nothing ventured, nothing gained. Emphasis on the latter.