Proprietary SW interfaces and hardware vendors

David Kanter (dkanter@realworldtech.com) on 2/10/12 wrote:
---------------------------
>Anon (no@email.com) on 2/10/12 wrote:
>---------------------------
>>David Kanter (dkanter@realworldtech.com) on 2/10/12 wrote:
>>---------------------------
>>>Anon (no@thanks.com) on 2/9/12 wrote:
>>>---------------------------
>>>>David Kanter (dkanter@realworldtech.com) on 2/9/12 wrote:
>>>>---------------------------
>>>>>The bottom line is that I don't think Nvidia is in a position to push a proprietary
>>>>>standard and get it adopted by the broader industry. HPC folks are probably fine,
>>>>>but I have a very hard time seeing mainstream software developers voluntarily choosing CUDA over the alternatives.
>>>>>
>>>>>Now you can argue that perhaps they will choose CUDA + something else, but I'm skeptical.
>>>>
>>>>Don't know about AMP and OpenACC, but when discussing potential implementations
>>>>of an app that would benefit from massive parallelism, the developers with experience
>>>>that I talked to unanimously said to use CUDA unless you absolutely had to run on
>>>>non-NVidia HW. They went so far as to suggest developing/debugging in CUDA and then
>>>>porting to OpenCL if necessary, just because the developer experience is so much better.
>>>
>>>Yes, and that's what I meant. It will change over time.
>>>
>>>Also, while developers might like CUDA (for good reasons), it doesn't matter if
>>>it cuts your potential revenue in 1/3rd.
>>>
>>>>Also, there does seem to be a limited opening of CUDA going on: http://pressroom.nvidia.com/easyir/customrel.do?easyirid=A0D622CE9F579F09&releasejsp=release_157&prid=831864
>>>>
>>>>Not sure how seriously to take it. Longer-term, I agree that CUDA needs to become an open standard to survive.
>>>
>>>That announcement was meaningless marketing. Control of the CUDA standard is what
>>>matters, not that someone else can make a compiler.
>>
>>On earth why?
>
>Because Nvidia could always choose to redefine CUDA to favor their next-gen GPU
>and disadvantage Intel, AMD and anyone else. And everyone knows that's exactly
>what Nvidia would do if someone else started using CUDA.

And break all the pre-existing CUDA code? You are really stretching here, you know.
If other implementations existed, were functional, and were actually in use, NVidia would only damage itself by doing this.
Of course, until such other implementations exist, the situation is different.

>
>You saw this with DX as well, both AMD and Nvidia probably wrote code that was
>optimal for their GPUs, but ran poorly on the competition. That was partially due
>to the very different underlying architectures, a factor which probably hit AMD's VLIW4/5 harder than Nvidia's designs.

I am unable to decode what you are claiming here; it makes no sense to me at all, especially considering what you write next.

>
>>That seems like a completely closed minded view.
>>how 'open' is DirectX, for example?
>
>You are missing the point. DirectX is closed but controlled by an agnostic company.
>MS really just wants DX to work on all their partner's hardware. They gain little
>benefit from favoring one partner over another, although they might have a lead partner in a given generation.

In which case you need to look at the history of DirectX a little more closely. In nearly every generation there has been strong evidence of MS favouring one particular 'camp' (although I hate the term myself). MS has well and truly shown that it is happy to use DX as a tool to push things the way it wants.

>On the other hand, having the GPU programming interface controlled by a single
>vendor that makes GPUs is a recipe for disaster. There's a reason why Glide was
>never adopted by anyone else...and those reasons are just as pertinent today to
>GPGPU as they were 15 years ago to graphics.

Because Glide was both rubbish and extremely hardware-specific? Two things that CUDA is not?

Personally, I think it doesn't matter. I notice you carefully avoid the content I wrote about dual codepaths, and the fact that CUDA easily outperforms OpenCL due to basic limitations of OpenCL. As long as that stays true, both will continue to be used and supported, and hardware targets relying on the OpenCL path will tend to suffer a performance disadvantage.
Of course, OpenCL could also be fixed, but there is little sign of that at present; let's hope...
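To make the dual-codepath point concrete, here is a minimal sketch of how such an app might dispatch between a tuned CUDA path and a portable OpenCL fallback at runtime. The detection stub and path names are hypothetical placeholders; a real implementation would query the driver (e.g. via the CUDA or OpenCL runtime APIs) and each path would contain vendor-tuned kernels rather than stubs.

```cpp
#include <string>

// Hypothetical backend identifiers for the two codepaths.
enum class Backend { Cuda, OpenCL };

// Detection stub: real code would probe the CUDA runtime for an
// NVidia device and fall back to OpenCL when none is found.
Backend pick_backend(bool nvidia_gpu_present) {
    return nvidia_gpu_present ? Backend::Cuda : Backend::OpenCL;
}

// Each branch would launch a backend-specific kernel; the stubs
// here just report which codepath was selected.
std::string run_kernel(Backend b) {
    switch (b) {
        case Backend::Cuda:   return "cuda-path";
        case Backend::OpenCL: return "opencl-path";
    }
    return "unknown";
}
```

An app structured this way keeps the fast CUDA path for NVidia hardware while everything else takes the (currently slower) OpenCL route, which is exactly the performance split described above.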

There is an interesting parallel in Cg/GLSL/HLSL.
It may surprise you to know that Cg is still heavily used, even though it started life as an NVidia solution. It is still THE standard for DCC (digital content creation), and HLSL ended up mirroring it quite closely.
GLSL, the 'open' standard, has much less traction, even within OpenGL.

I am sure there are a lot of people who would love to see Intel and AMD have CUDA support, and NVidia could well be one of them.