
CUDA is easy to use, while the open standard (OpenCL) has an extremely high barrier to entry. This is what enabled Nvidia to lock people in in the first place: their tech was just so much better.
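To make "easy to use" concrete, here is a minimal CUDA vector add (an illustrative sketch, not code from this thread): device and host code share one source file, and allocation, launch, and synchronization take a handful of runtime calls.

    // vector_add.cu - minimal sketch of CUDA's single-source model.
    // Build: nvcc vector_add.cu -o vector_add
    #include <cstdio>
    #include <cuda_runtime.h>

    // Device code sits next to host code; nvcc compiles both.
    __global__ void add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;

        // Unified memory: one allocation visible to host and device.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover n elements.
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compile with nvcc and run; no separate kernel compilation step or device discovery code is needed.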


That "barrier to entry" line works for things that saturate broad markets... and that is definitely not the case with GPGPU. So when you try to use that line of thinking, given the incredibly well funded and hyper niche use cases it sees, it sounds as if you're saying that opencl is too hard for those dummies writing nuclear weapon simulators at Oak Ridge National Laboratory. And before anybody swings in on a chandelier with "but the scientists are science-ing, they can't be expected to learn to code - something something python!": take a look at the software and documentation those labs make publicly available - they are definitely expected to be comfortable with things weirder than opencl.


If you have a hard task to accomplish, and there's a way of making it substantially easier, the smart engineer is, frankly, going to take the easier option. They're only going to go with the hard one if that's the only hardware they have access to.

Back in university we had to specify to the IT department that we needed Nvidia GPUs, because people had done all sorts of cool things with CUDA that we could build on; if we'd had to do the same work on AMD GPUs back in 2013, we'd have burnt through all of our time just getting the frameworks to compile.


Yep.

Source: burnt through all my time just getting frameworks compiling on AMD GPUs back in 2014.


Sure, they can work with complex libraries, but if a better one is available I would totally understand them preferring it. You need to be in a certain software bubble - not to be able to work with OpenCL, but to care enough about whether something is an open standard, whether something is open source, etc.


> You need to be in a certain software bubble...

Or have a fundamental understanding of the way mainframes have been built since forever: massive job schedulers feeding various ASICs purpose-built for different tasks. IBM is really something else when it comes to this sort of thing - the left hand really doesn't know what the right hand is doing over there.

Take Summit at ORNL: a homogeneous supercomputer cluster of POWER9 machines effectively acting as PCI backplanes for GPGPUs. You'd think they'd know better. The choice of ISA made sense given the absolute gulf between POWER and x86 I/O at the time, but to then not take full advantage of those CPUs by going with CUDA... wow.

Oh well, this is the same company that fully opened their CPU - and then immediately announced that their next CPU was going to depend on binary-blobbed memory controllers... and they also sold the IP for those controllers to some scumbag softcore-silicon IP outfit. So despite all their talk about open source, no - they don't seem to actually understand how to take full advantage of it.


Mainframes died for a reason, and that reason wasn't the capabilities of their specialized hardware blocks, but:

a. the difficulty of using them in unintended ways, such as wiring together a bunch of them to get better performance

b. the specialized knowledge required to get them to do anything at all

c. the limited use cases of the hardware (see a.) which made it expensive

Applauding OpenCL because it is more like mainframes than CUDA... seems nonsensical to me.



