Too little too late.
Already sold my 3070 and went for a 7900 XT because I got fed up with Nvidia being lazy.
Good. This is the better overall solution
Well, I dearly miss CUDA, as I can't get ZLUDA to work properly with Stable Diffusion, and FSR is still leagues behind DLSS… but yeah, overall I am very happy.
You can run Ollama with AMD acceleration.
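For what it's worth, here is a minimal sketch assuming a local Ollama server with ROCm support is running and a model has already been pulled (the model name and prompt are just examples); since acceleration happens inside the server, the client code is the same regardless of GPU vendor:

```python
import ollama  # pip install ollama

# Assumes a local Ollama server built with ROCm support is running and
# a model (here "llama3", just an example) has already been pulled.
# GPU acceleration happens inside the server, so this client code is
# identical whether the backend runs on an AMD or an Nvidia GPU.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello from an AMD GPU."}],
)
print(response["message"]["content"])
```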
I'm aware. I just wanted to point out that AMD isn't totally useless for AI.
Oh, it definitely isn't.
Everything I need runs, and I finally don't run out of VRAM so easily 😅
I had to replace my laptop about two years ago and decided to go full AMD, and it's been awesome. I've been running Wayland as a daily driver the whole time and I don't even really notice it anymore.
Even now, choosing between a free 4090 and a free 7900 XTX would be easy.
It totally depends on your use case.
Nvidia runs 100% rock solid on X11.
If you're someone who really uses CUDA and the rest of Nvidia's stack and doesn't care about Wayland, Nvidia is the choice you have to make. Simple as that.
If you don't care about those things, or are willing to sacrifice time and tinker around with AMD's subpar alternatives, AMD is the way to go.
Because let's face it: AMD didn't care about machine learning for a long time, and they're only now beginning to dabble in it. That cost them a huge number of people who work with those things as their day job. Those people can't tell their bosses and/or clients that they can't work for a week or two until they've figured out how to get an alternative running from a vendor that's only just starting to care about that field of work.
Luckily, the only place I'm going to use ML is my workstation server, which will have its Quadro M2000 replaced/complemented by my GTX 1070 once I have an AMD GPU in my main PC. On that PC I mainly care about running games in 4K, with high settings but without much ray tracing, on Wayland.
These hypothetical people should use Google Colab or similar services for ML/AI, since that's far cheaper than owning a 4090 or an A100.
These absolutely not hypothetical people should absolutely NOT be using Google Colab.
Keep your data to yourself; don't run shit in the cloud that can be run offline.
Exactly what data are you worried about giving to Colab?
If the day comes that I want to upgrade my 3080, I'll switch to an AMD solution, but until then I'll take any improvement I can get from Nvidia.
I don't believe Nvidia were the ones being lazy in this regard; they submitted the merge request for explicit sync quite some time ago now. The Wayland devs essentially took their sweet time merging the code.