I'd guess that the majority of ML software is written in PyTorch, not CUDA, and PyTorch supports multiple backends, including AMD. torch.compile also supports AMD (generating Triton kernels, just as it does for NVIDIA), so for most people there's no need to drop to a lower level.
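As a minimal sketch of what that looks like in practice (assuming a CUDA or ROCm build of PyTorch; gelu_mul is just a toy function I made up for illustration):

    import torch

    # ROCm builds of PyTorch expose AMD GPUs through the same "cuda"
    # device type, so this line works unchanged on NVIDIA and AMD.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    def gelu_mul(x, y):  # toy op, purely illustrative
        return torch.nn.functional.gelu(x) * y

    # torch.compile lowers this to Triton kernels on either vendor's GPU.
    compiled = torch.compile(gelu_mul)

    x = torch.randn(1024, 1024, device=device)
    y = torch.randn(1024, 1024, device=device)
    out = compiled(x, y)

The point being that nothing in the model code mentions a vendor at all - the backend choice is an install-time detail.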
Sure, but if the OctaneRender folk wanted to support AMD, I highly doubt they'd be interested in a CUDA compatibility layer either - they'd want to use the lowest-level API available (Vulkan?) to get close to the metal and optimize performance.
I said that if they wanted to support AMD they would use the closest-to-the-metal API possible, and your links prove that this is exactly their mindset - preferring a lower-level, more performant API over a higher-level, more portable one.
For many people the tradeoffs are different, and the ability to write code quickly and iterate on a design matters more.