Team Red says that researchers and developers working with frameworks such as PyTorch, ONNX Runtime, or TensorFlow can now leverage the newest ROCm 6.1.3 on Linux, tapping the performance of AMD’s Radeon RX 7000 series GPUs or the workstation-class Radeon W7000 series GPUs for their respective use cases.
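In practice, a ROCm build of PyTorch exposes the Radeon GPU through the familiar torch.cuda API (HIP presents itself as CUDA), so a lot of existing CUDA-targeted code runs unchanged. A minimal sketch, assuming ROCm and a matching PyTorch ROCm wheel are installed:

```python
import torch

# On ROCm builds of PyTorch, the HIP backend is reached through torch.cuda.
print(torch.cuda.is_available())      # True if the ROCm runtime sees the GPU
print(torch.cuda.get_device_name(0))  # e.g. a Radeon RX 7900 XTX
print(torch.version.hip)              # HIP version string (None on CUDA builds)

# "cuda" maps to the Radeon GPU here; the matmul runs through ROCm/HIP.
x = torch.randn(1024, 1024, device="cuda")
y = x @ x
```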
So will this bring CUDA-like performance to Radeon cards for genAI? 'Cause I’ve rocked Nvidia GPUs for over 10 years now and I’m game (🥁) for something cheaper and more power-efficient (as long as it has ≥24GB of VRAM).
Wouldn’t that also depend on support from downstream software/frameworks?
At the beginning of the year I was experimenting with Stable Diffusion, and I got the impression that support for AMD/Intel GPUs was lacking (I have an Nvidia GPU).
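For what it's worth, on the Stable Diffusion side the downstream-support question mostly comes down to which PyTorch build you have: with a ROCm build, a Hugging Face diffusers pipeline targets the Radeon GPU through the same "cuda" device string. A rough sketch, assuming diffusers plus a ROCm PyTorch wheel (the model id is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion model on the Hub loads the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
# On a ROCm build of PyTorch, "cuda" is the HIP backend, i.e. the Radeon GPU.
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a red GPU").images[0]
image.save("out.png")
```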
And I doubt AMD is going to offer any great deal; pricing will definitely be based on Nvidia’s, but at a discount. It wouldn’t make sense to do otherwise, considering the relatively low level of competition in this segment.