
Gdev: A Competitive Open-Source CUDA Implementation

03-29-2012, 10:10 PM

Phoronix: Gdev: A Competitive Open-Source CUDA Implementation

Shinpei Kato, the developer who last year at XDC2011 in Chicago presented TimeGraph, an open-source GPU command scheduler for Linux, along with PathScale's GPGPU run-time, has something new to share. Shinpei's latest project is Gdev, which comes down to being an open-source CUDA implementation that's competitive with NVIDIA's proprietary stack...

Sounds awesome, but I'm sure Linus will get a few laughs from the proposal to run CUDA kernels... in the Linux kernel. It'll go something like "hahahahahahahahahahahaha NO". If we can't even do floating point or C++, how can we do something this "heavy" in the kernel?

Also: don't confuse this project with "gudev" -- I almost did at first.


Kidding. I don't think going with vendor-dependent technology, given the situation CUDA is in, is a very good idea.
(And in this case, I have some technically grounded reservations.)

I'm not a huge fan of CUDA, but if I remember correctly, the nvcc that Nvidia open-sourced a while back was LLVM-based. If that's the case, a CUDA state tracker that can output LLVM IR might be feasible without too much effort. And once we've got LLVM IR, we might also be able to run CUDA programs on CPU/R600/Nouveau/other (assuming the user is willing to recompile their programs targeting that architecture). OpenCL is still better for us in the long run, but it would be useful to be able to execute CUDA code via the OSS drivers, even if just to ease porting to OSS CL implementations.


I'm not a huge fan of CUDA, but if I remember correctly, the nvcc that Nvidia open-sourced a while back was LLVM-based. If that's the case, a CUDA state tracker that can output LLVM IR might be feasible without too much effort. And once we've got LLVM IR, we might also be able to run CUDA programs on CPU/R600/Nouveau/other (assuming the user is willing to recompile their programs targeting that architecture). OpenCL is still better for us in the long run, but it would be useful to be able to execute CUDA code via the OSS drivers, even if just to ease porting to OSS CL implementations.

By doing so you are encouraging the spread of a proprietary specification; a specification controlled by a single vendor and shaped according to their needs. Given that we already have the open standard OpenCL, which, roughly speaking, is a superset of CUDA, I don't think we should promote closed alternatives.


Don't forget that this is OS RESEARCH! Architecture is the first step ("Embrace..."). Next up -> Gallium when it's ready, so it'll 'run' on non-NVIDIA hardware ("...Extend..."). After that it's killing CUDA ("knife the baby") and "...Extinguish!".

This is the tried and proven Microsoft way to operating system success. Who's gonna disprove me on that? ;D