In short, the answer is "It IS the best way, but a little longer than you thought."

Pricing aside, the most glaring problem is that LLVM doesn't have a backend for Knights Corner, and SSE instructions won't work on KNC either.
We would also need a (virtual?) DRM driver to handle all the mess, such as memory management and interaction with i915. You won't want a pure rendering card.
Finally, unless Intel pushes the KNC kernel driver into the Linux mainline, we'll need to stick with RHEL6/CentOS6/SL6/SUSE and their ancient software.

Nevertheless, it is still the most promising path toward high-performance rendering with OSS.

Thanks for the reply. Some version of KC will probably trickle down to the consumer market eventually. The days of mixing and matching different vendors' parts are coming to an end, so if Intel wants to stay relevant in graphically intensive applications it must develop a high-performance GPU. I remember reading rumors that future integrated graphics might even be based on it. It will probably take some time, though...

What I find most interesting is that, if I understood it correctly, since KC is "easier" to program, developing a driver for it would probably be "easier" than for other GPUs. That would reduce the effort a complete GPU driver requires, improving our chances of having open drivers. If such a trend catches on with other GPU manufacturers, that should be great for consumers, right?

(I only skimmed part of KC's ISA, so sorry for any mistakes.)
OpenGL developers wouldn't need to write a "driver" so much as an "OpenGL server". That's what developers did in the "good old days" :P, when SGI dominated workstations.
Unfortunately, implementing one is still a lot of work, but this time people wouldn't need to work on two ISAs for each card.

As for an open-source GPU driver, I'm still not optimistic. It will take a long time before the community poses a real threat to NV/AMD's solutions. For example, Mesa's state tracker can only handle OpenGL 3.1, not 4.x.

I guess what I was expecting is that if a GPU handles code that's general enough, the same driver would work for every GPU that general. Bear with me for a moment.

Obviously I'm going out on a limb here, but let's imagine Xeon Phi crushes the competition in the HPC space and enters the consumer market (probably integrated as the GPU of some future Intel SoC).

AMD and nVidia would be pressed to put out more "general" GPUs, probably accepting the ARM instruction set. Maybe they'd even decide to push this "programmability" into future OpenGL revisions.

In this scenario, if a driver similar to llvmpipe ran on every GPU out there, the effort of building and maintaining a driver would be much less than it is now. Right? This could potentially be a huge win for open source: a single driver to rule them all, much like Linux itself. One can only dream...

I'm sorry if I got any of the terms or concepts wrong; I'm not a programmer.

It's possible, but first AMD & NV would have to agree on an ISA, or at least on some features of their processors.
Just as one can't easily develop an OS for two CPUs where one has an MMU and interrupts and the other doesn't, we currently can't develop such a general driver for all GPUs because they differ in too many ways. For example, Intel, AMD and NVIDIA each handle context switching in a different way.

BTW, maybe the funniest thing is: if a GPU is "general enough", why do we keep calling it a GPU? Just because it can output video?

I guess the two big challenges will be texture processing (Larrabee had dedicated texture units) and scaling to a larger number of threads.

My recollection is that recent llvmpipe versions scaled pretty well to 3 cores but hit diminishing returns after that (see Michael's test below, but ignore the 12-thread result because there you're running "hyper-threads" instead of more cores):

I think the scaling issue should be manageable (GPUs manage it today with the equivalent of 20+ cores). I'm less sure about texturing, simply because there's a lot of processing power hidden in texture filtering.
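If you want to probe that scaling yourself, llvmpipe's worker-thread count can be overridden from the environment. A minimal sketch using Mesa's documented `LP_NUM_THREADS` and `LIBGL_ALWAYS_SOFTWARE` variables; `glxgears` here just stands in for whatever GL workload you're measuring:

```shell
# Force software rendering (llvmpipe) and pin its rasterizer to a
# fixed number of worker threads, then run a GL client against it.
# Sweep the value (1, 2, 3, 4, ...) to see where scaling flattens out.
LP_NUM_THREADS=3 LIBGL_ALWAYS_SOFTWARE=1 glxgears
```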

Originally Posted by zxy_thf

BTW, maybe the funniest thing is: if a GPU is "general enough", why do we keep calling it a GPU? Just because it can output video?

Have you no faith in Marketing? GPU will just become "General-purpose Processing Unit".

Due to my ignorance of the subject, I couldn't grasp from AMD's roadmap whether such "programmability" is also expected in the AMD camp. Obviously you can only share what's already public, but if you could be so kind as to briefly clarify how the HSA improvements differ from a Xeon + Xeon Phi setup, I'm sure we layman users would greatly appreciate it.

It's possible, but first AMD & NV would have to agree on an ISA, or at least on some features of their processors.
Just as one can't easily develop an OS for two CPUs where one has an MMU and interrupts and the other doesn't, we currently can't develop such a general driver for all GPUs because they differ in too many ways. For example, Intel, AMD and NVIDIA each handle context switching in a different way.

I think the keyword here is LLVM. Yes, you can easily develop an OS for two CPUs using LLVM. Of course, if one CPU lacks some feature like an MMU, the OS must be able to cope with the missing component, but that's a fairly small difference that LLVM should have no problem handling.

Code that targets LLVM does not target a specific ISA...
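To make that concrete, here's a minimal illustration of what "not targeting a specific ISA" looks like: a trivial function written in LLVM IR, which the same toolchain can lower to x86-64, ARM, or any other target that has a backend (which is exactly what KNC was said to be missing earlier in the thread).

```llvm
; A target-neutral function in LLVM IR: no registers, no
; ISA-specific instructions. llc can lower this same file
; to x86-64, ARM, etc. -- only the backend differs.
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b
  ret i32 %sum
}
```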

Originally Posted by zxy_thf

BTW, maybe the funniest thing is: if a GPU is "general enough", why do we keep calling it a GPU? Just because it can output video?

Why are we still calling something a "sound card" when it's most often part of the chipset? Historical reasons. We already have OpenCL and other standards that make a GPU a lot more than a GPU.

(I only skimmed part of KC's ISA, so sorry for any mistakes.)
OpenGL developers wouldn't need to write a "driver" so much as an "OpenGL server". That's what developers did in the "good old days" :P, when SGI dominated workstations.
Unfortunately, implementing one is still a lot of work, but this time people wouldn't need to work on two ISAs for each card.

As for an open-source GPU driver, I'm still not optimistic. It will take a long time before the community poses a real threat to NV/AMD's solutions. For example, Mesa's state tracker can only handle OpenGL 3.1, not 4.x.

Actually, they would need to write a Mesa driver, since Mesa already has an OpenGL server.

What do you mean? The Mesa drivers for NV/AMD are at 3.1 too. The Xeon Phi with a good Mesa driver has a fair chance of giving us performance that neither NV nor AMD can currently match.

Yes, I know the proprietary drivers have more features and performance, but that's totally irrelevant here. For a bunch of reasons I need FOSS drivers, so I have to judge a device by how it performs with FOSS drivers. And I know I'm not alone with such use cases.