• TropicalDingdong@lemmy.world
    7 months ago

    yep yep and yep.

    And they’ve been eating their lunch for so long at this point that I’ve given up on that changing.

    The new world stands on CUDA, and that’s just the way it is. I don’t really want an Nvidia card; Radeon seems far better for price-to-performance. Except I can justify an Nvidia for work.

    I can’t justify a radeon for work.

    • cbarrick@lemmy.world
      7 months ago

      Long term, I expect Vulkan to be the replacement to CUDA. ROCm isn’t going anywhere…

      We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.

      • cuFFT -> vkFFT (this definitely exists)
      • cuBLAS -> vkBLAS (is anyone working on this?)
      • cuDNN -> vkDNN (this definitely doesn’t exist)

      At that point, adding Vulkan support to XLA (Jax and TensorFlow) or ATen (PyTorch) wouldn’t be that difficult.
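      For reference, here is a minimal CPU sketch (NumPy only, illustrative rather than any actual library API) of the three primitives those libraries accelerate — an FFT, a BLAS GEMM, and a 2-D convolution. A Vulkan replacement would need to produce these same results on-device:

      ```python
      # Illustrative CPU versions of the primitives behind cuFFT, cuBLAS, and cuDNN.
      import numpy as np

      rng = np.random.default_rng(0)

      # cuFFT / vkFFT territory: a 1-D fast Fourier transform.
      x = rng.standard_normal(8)
      X = np.fft.fft(x)

      # cuBLAS territory: GEMM (general matrix-matrix multiply),
      # i.e. the core of sgemm/dgemm: alpha * A @ B + beta * C.
      A = rng.standard_normal((4, 3))
      B = rng.standard_normal((3, 5))
      C = A @ B

      # cuDNN territory: a 2-D "valid" convolution, the core DNN op.
      img = rng.standard_normal((5, 5))
      kern = rng.standard_normal((3, 3))
      out = np.zeros((3, 3))
      for i in range(3):
          for j in range(3):
              out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kern)
      ```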

      • DarkenLM@kbin.social
        7 months ago

        wouldn’t be that difficult.

        The number of times I’ve said that, only to be quickly proven wrong by the fundamental forces of existence, is the reason that’s going to be written on my tombstone.

      • TropicalDingdong@lemmy.world
        7 months ago

        I think it’s just path stickiness at this point. CUDA works, and then you can ignore its existence and do the thing you actually care about. ML in the pre-CUDA days was painful. CUDA makes it not painful. Asking people to return to painful…

        Good luck…