Hacker News

I'd guess that the majority of ML software is written in PyTorch, not in CUDA, and PyTorch has support for multiple backends, including AMD's ROCm. torch.compile also supports AMD (generating Triton kernels, same as it does for NVIDIA), so for most people there's no need to go lower level.
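A minimal sketch of the point above: the same plain PyTorch function, with no vendor-specific kernel code anywhere, can be handed to torch.compile, which generates Triton kernels for whichever GPU backend the installed torch build targets (NVIDIA or AMD/ROCm). This sketch assumes torch is installed; it uses the "eager" debugging backend so it runs without a GPU or a C++ toolchain, whereas the default "inductor" backend is what does the Triton code generation.

```python
import torch

# An ordinary PyTorch function: elementwise multiply, add, ReLU.
# No CUDA, HIP, or Triton code is written by hand here.
def fused_op(x, y):
    return torch.nn.functional.relu(x * y + 1.0)

# torch.compile traces the function and hands it to a backend.
# backend="eager" just runs it as-is (portable for this sketch);
# the default backend="inductor" emits Triton kernels on GPU.
compiled = torch.compile(fused_op, backend="eager")

x = torch.randn(1024)
y = torch.randn(1024)
out = compiled(x, y)

# The compiled function computes the same result as the original.
assert torch.allclose(out, fused_op(x, y))
```

The key property is that the source code stays backend-agnostic: the same `fused_op` runs unchanged on an NVIDIA or AMD build of PyTorch.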


GPUs are used for more than only ML workloads.

CUDA's relevance in the industry is now so great that NVIDIA holds several WG21 seats and helps drive the heterogeneous programming roadmap for C++.


You can use PyTorch for more than ML. No need to use backprop. Think of it as GPU-accelerated NumPy.
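To make the "GPU-accelerated NumPy" framing concrete, here's a minimal sketch (assuming torch is installed): pure array math with autograd explicitly disabled, which runs on a GPU if one is available and on the CPU otherwise.

```python
import math
import torch

# No model, no gradients -- just tensor math, NumPy-style.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.linspace(0.0, 2.0 * math.pi, 1_000_000, device=device)

with torch.no_grad():  # opt out of the backprop machinery entirely
    # sin^2 + cos^2 == 1 everywhere, so the mean should be ~1.0
    y = torch.sin(x) ** 2 + torch.cos(x) ** 2

print(y.mean().item())
```

Swap `device` and the same code moves between CPU and GPU, which is the portability argument being made in this thread.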


I would like to see OctaneRender done in PyTorch. /s


Sure, but if the OctaneRender folks wanted to support AMD, then I highly doubt they'd be interested in a CUDA compatibility layer either - they'd want to be using the lowest-level API possible (Vulkan?) to get close to the metal and optimize performance.


See, that is where you got it all wrong: they dropped Vulkan for CUDA, and even gave a talk about it at GTC.

https://www.cgchannel.com/2023/11/otoy-releases-first-public...

https://www.cgchannel.com/2023/11/otoy-unveils-the-octane-20...

And then again, there are plenty of other cases where PyTorch makes absolutely no sense on the GPU, which was the whole starting point.


> See, that is where you got it all wrong

I said that if they wanted to support AMD they would use the closest-to-the-metal API possible, and your links prove that this is exactly their mindset - preferring a lower-level, more performant API to a higher-level, more portable one.

For many people the tradeoffs are different, and the ability to write code quickly and iterate on a design matters more.




