
I think GPGPU is considered difficult because people aren't taught how to do it, and there's very little software support for it outside of esolangs, research projects, single-person GitHub crusades, or vendor-specific stuff like CUDA.

There's other stuff that's difficult too, like farming out a compute workload to a bunch of unreliable servers over unreliable networks, but there's so much tooling and expertise going around that people do it regularly.



If CUDA feels bad, there's a cross-platform API called OpenCL. It's even possible to generate OpenCL from C++ without directly writing kernels, using Boost.Compute, and I wouldn't call C++ an esolang. And if you're fine with Nvidia, there's stuff like Thrust and cuBLAS. It's true that it's not taught, but then again, optimization isn't interesting to computer scientists, software engineers are taught it's evil, and physicists are supposed to just read a book and get to work.
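To illustrate the Boost.Compute point: here is a minimal sketch of running a transform on the GPU without hand-writing any OpenCL kernel source, assuming the Boost.Compute headers and an OpenCL runtime are available (this follows the library's standard usage; it won't run on a machine without an OpenCL device).

```cpp
// Sketch: Boost.Compute generates and compiles the OpenCL kernel
// for the transform at runtime; no kernel source is written by hand.
#include <vector>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main() {
    // Pick the default OpenCL device and set up a queue on it.
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // Host data, copied to a device-side vector.
    std::vector<float> host = {1.f, 4.f, 9.f, 16.f};
    compute::vector<float> dev(host.size(), ctx);
    compute::copy(host.begin(), host.end(), dev.begin(), queue);

    // Element-wise sqrt on the device; the kernel is generated
    // from the compute::sqrt<float> function object.
    compute::transform(dev.begin(), dev.end(), dev.begin(),
                       compute::sqrt<float>(), queue);

    // Copy results back to the host.
    compute::copy(dev.begin(), dev.end(), host.begin(), queue);
}
```

The STL-like iterator interface is the design point here: the same `copy`/`transform` shapes you'd write against `std::vector` map onto device buffers and generated kernels.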

I think distributed computing is OK because it again enables things that would be impossible with a single computer, no matter how huge.


So on the OpenCL side you get a printf-style debugging experience in C, and you get to play compiler by shipping kernel source code to be compiled on the fly; meanwhile on the CUDA side there's graphical debugging of kernel code, support for standard C++17, and the ability to ship bytecode.


It took me a while to figure out how the comment related to the discussion. But I never said that CUDA itself was difficult, just that I've never met a manager who wouldn't turn down a programmer's suggestion to do computation on the GPU.



