Not sure if this is the right place, if not please let me know.
GPU prices in the US have been a horrific bloodbath with the scalpers recently. So for this discussion, let's keep it to MSRP and the lucky people who actually managed to afford those insane MSRPs and find the GPU they wanted.
Which GPU are you using, and to run which LLMs? How is the performance of the LLMs you have selected? On average, what size of LLM are you able to run smoothly on your GPU (7B, 14B, 20-24B, etc.)?
What GPU do you recommend for a decent amount of VRAM relative to price (MSRP)? If you're using a top-of-the-line RX 7900 XTX/4090/5090 with 24+ GB of VRAM, comment below with some performance estimates too.
My use case: code assistants for Terraform plus general shell and YAML, plain chat, some image generation. And being able to still pay rent after spending all my savings on a GPU with a pathetic amount of VRAM (LOOKING AT BOTH OF YOU, BUT ESPECIALLY YOU NVIDIA YOU JERK). I would prefer GPUs under $600 if possible, but I also want to run models like Mistral Small, so I suppose I have no choice but to spend a huge sum of money.
Thanks
You can probably tell that I’m not very happy with the current PC consumer market but I decided to post in case we find any gems in the wild.
I'm currently looking into this as well. As far as my research goes, I'll probably go for 2x AMD Instinct MI50. Each of them has equivalent or slightly higher performance than a P40, but usually only 16 GB of VRAM (if you're super lucky you might get one with 32 GB; those are usually not labeled as such, and are probably binned MI60s). With two of them you get 32 GB of VRAM and quite decent performance for, right now, about €200 per card. Alternatively, you should be able to run quantized models on a single card as well.
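For a rough sense of what fits on a single 16 GB card, you can estimate the weight footprint of a quantized model as parameters × bits-per-weight / 8. A back-of-the-envelope sketch; the bits-per-weight values are rough averages for GGUF-style quants, and KV cache plus runtime overhead come on top:

```python
# Rough VRAM estimate for quantized model weights (assumption: GGUF-style
# quantization; bits-per-weight values are approximate averages, not exact).
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, in GiB (no KV cache/overhead)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# Mistral-Small-class 24B model at a ~4.5 bpw quant (roughly Q4_K_M territory):
print(round(weight_gib(24, 4.5), 1))   # ≈ 12.6 GiB -> tight but plausible on 16 GB
# The same model at 8 bits per weight:
print(round(weight_gib(24, 8.0), 1))   # ≈ 22.4 GiB -> would need both cards
```

So a 4-bit-ish quant of a 24B model just about squeezes onto one 16 GB MI50, with little room left for context.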
If you don't mind running ROCm instead of CUDA, this seems like good bang for the buck. Alternatively, you might look into AMD's new line of "AI" SoCs (for example, Framework's Desktop computer). They seem to be really good as well, and depending on your use case might be more useful than an equally priced 4090.
Do you have two PCIe x16 slots on your motherboard (speaking in terms of electrical connections)?
They would run at x8 speed each. That shouldn't be too much of a bottleneck, though; I wouldn't expect performance to suffer noticeably more than 5% from this. Annoying, but a CPU + board with 32 or more PCIe lanes would throw off the price/performance ratio.
I have an alternative for you if your power bills are cheap: X99 motherboard + CPU combos from China