I have a question. I see people, with some frequency, sugar-coating the marriage between Nvidia GPUs and Linux. I get that if you already have an Nvidia GPU, or you need CUDA, or you work with AI and want to use Linux, that’s possible. Nevertheless, it’s still a very questionable relationship.

Shouldn’t we be raising awareness for anyone planning to play titles that use DX12? I mean, a 15% to 30% performance loss on Nvidia compared to Windows, versus 5% to 15% (and sometimes equal or better performance) on AMD, isn’t that something we should be alerting others to? For example, a game running at 100 FPS on Windows could drop to roughly 70–85 FPS on an Nvidia card under Linux, while an AMD card would land around 85–100 FPS.

I know we wanna get more people on Linux, and NVIDIA’s getting better, but don’t we need some real talk about this? Or is there some secret plan to scare people away from Linux that I missed?

Am I misinformed? Is there some strong reason to buy an Nvidia GPU if your focus is gaming on Linux?

  • data1701d (He/Him)@startrek.website · 7 hours ago

    I feel like most people who use Nvidia on Linux just got their machine before they were Linux users, plus a small subset who need it for ML stuff.

    Honestly, I hear ROCm may finally be getting less horrible, is getting wider distro support, and supports more GPUs than it used to, so I really hope AMD will become as livable an ML dev platform as it is a desktop GPU.

    • utopiah@lemmy.ml · 4 hours ago

      Yep, that’d be me. That said, if I were to buy a new GPU today (well, tomorrow; I’m waiting on Valve’s announcement of its next HMD), I might still get an NVIDIA. Even though I’m convinced 99% of LLM/GenAI is pure hype, if the 1% that might be useful could be built ethically and run on my hardware, I’d be annoyed if it couldn’t because ROCm is just a tech demo or is too far behind performance-wise. Then again, the percentage is so ridiculously low I’d probably pick the card which treats the open ecosystem best.

      • MalReynolds@piefed.social · 17 minutes ago

        ROCm works just fine on consumer cards for inferencing: it’s competitive or superior in $/token/s and beats NVIDIA on power consumption. ROCm 7.0 seems to be giving a >2x uplift on consumer cards over 6.x, so that’s lovely. I haven’t tried 7 myself yet, waiting for the dust to settle, but I have no issues with image gen, text gen, image tagging, video scanning, etc. using containers and distroboxes on Bazzite with a 7800 XT.
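
        (A minimal smoke test for that kind of setup, assuming a ROCm build of PyTorch inside the container; the gfx-version override below is a common workaround for RDNA3 consumer cards and may not be needed on every machine.)

        ```python
        # ROCm/PyTorch smoke test -- a sketch, not the exact setup above.
        import os

        # Some RDNA3 consumer cards need this override set before torch initializes;
        # treat the exact value as an assumption for your specific card.
        os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")

        import torch

        # ROCm builds of PyTorch expose the GPU through the familiar torch.cuda API.
        print("HIP build:", torch.version.hip)        # None on CUDA-only builds
        print("GPU visible:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("Device:", torch.cuda.get_device_name(0))
            # Tiny matmul as an inference-style sanity check.
            a = torch.randn(2048, 2048, device="cuda")
            b = torch.randn(2048, 2048, device="cuda")
            c = a @ b
            torch.cuda.synchronize()
            print("Matmul OK:", tuple(c.shape))
        ```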

        Bleeding edge and research tend to be CUDA, but mainstream use cases are getting ported reasonably quickly. TL;DR: unless you’re training or researching (unlikely on consumer cards), AMD is fine and performant, plus you get stable Linux and great gaming.

      • FauxLiving@lemmy.world · 4 hours ago

        I use local AI for speech/object recognition from my video security system and for control over my Home Assistant and media services. These services are isolated from the Internet for security reasons, which wouldn’t be possible if they required OpenAI to function.
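
        (A rough sketch of what the fully offline part can look like; the ONNX model path, snapshot file, and input size are placeholders, not the actual stack described here.)

        ```python
        # Fully local object-detection step -- illustrative only.
        import cv2

        # A detection model exported to ONNX and stored on disk, so nothing
        # needs to reach out to the Internet at inference time.
        net = cv2.dnn.readNetFromONNX("models/detector.onnx")

        frame = cv2.imread("snapshot.jpg")  # e.g. a still from the security camera
        blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                     size=(640, 640), swapRB=True)
        net.setInput(blob)
        outputs = net.forward()

        # Decoding boxes/classes depends on the model family; downstream logic
        # (alerts, Home Assistant automations) would hang off this result.
        print("raw output shape:", outputs.shape)
        ```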

        ChatGPT and Sora are just tech toys, but neural networks and machine learning are incredibly useful components. You would be well served by staying current on the technology as it develops.

      • data1701d (He/Him)@startrek.website · 4 hours ago

        From what I’ve heard, ROCm may finally be getting out of its infancy; at the very least, I think by the time we get something useful, local, and ethical, it will be pretty well-developed.

        Honestly, though, I’m in the same boat as you and actively try to avoid most AI stuff on my laptop. The only “AI” thing I use is the occasional image upscale. I find it kind of useless on photos, but it’s sometimes helpful when doing vector traces of bitmap graphics with flat colors; Inkscape’s results aren’t always good with lower-resolution images, so putting that specific kind of graphic through a “cartoon mode” upscale sometimes improves results dramatically for me.

        Of course, I don’t have GPU ML acceleration, so it just runs on the CPU; it’s a bit slow, but still less than 10 minutes.
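
        (For a concrete, if not identical, example of that kind of CPU-only upscale: a sketch using OpenCV’s dnn_superres module from opencv-contrib-python. The ESPCN model file is an assumption and isn’t the “cartoon mode” upscaler mentioned above.)

        ```python
        # CPU-only 4x super-resolution upscale -- illustrative, not the exact tool above.
        # Requires opencv-contrib-python and a pretrained model file (e.g. ESPCN_x4.pb)
        # downloaded beforehand.
        import cv2

        sr = cv2.dnn_superres.DnnSuperResImpl_create()
        sr.readModel("ESPCN_x4.pb")   # pretrained ESPCN weights stored locally
        sr.setModel("espcn", 4)       # model name and 4x scale factor

        img = cv2.imread("flat_color_graphic.png")
        upscaled = sr.upsample(img)   # runs on the CPU by default

        cv2.imwrite("flat_color_graphic_x4.png", upscaled)
        print("upscaled:", img.shape, "->", upscaled.shape)
        ```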

      • warmaster@lemmy.world · 4 hours ago

        I did this.

        From:

        Intel i7-14700K + RTX 3080 Ti

        To:

        Ryzen 7700X + RX 7900 XTX.

        The difference on Wayland is very big.