
The problem is that it will be like that forever. AMD and NVIDIA each deploy hundreds of programmers to create fast, optimized, bug-free drivers for their GPUs, while only a handful of programmers (I'd say 10 at most) work on the open source drivers.

We don't need to "defeat" the blobs, since they have a number of inherent faults that make them undesirable and which can not be fixed.

What we need is to bring performance to the point where it does not matter any more, so all the inherent advantages of OSS drivers: KMS, out-of-the-box support, integration, support for X technologies, lack of spyware and malware, longer support, etc. take over.

With r300g, we're already there. With r600g, it will take a while longer, but we'll get to 65-75% of the performance, and that's fast enough for most people. There will always be people who mess around with blobs, shoehorning them into a kernel that was not designed for them, to gain 30% more FPS, but I imagine that for most users this will become unnecessary, just like nobody installs the nforce ethernet binary driver nowadays.

It's like GCC vs. ICC. ICC is much faster, but everyone uses GCC anyway, because speed is not everything. We need to cover the needs of computer users; free software itself is THE killer argument.


The difference is that with icc vs. gcc, the performance gap for the vast majority of compiled programs is usually about 5-10%, if there is a difference at all. GCC can also be faster on some workloads, again not by much. Also, the resulting binary ends up with the same features no matter what compiled it. Comparing compilers to drivers isn't a good comparison.


Well, if the performance difference is 20% (if there is one at all), like is the case with r300g, then it is indeed a good comparison.

Sure, the r600+ drivers have some catching up to do, but they did start far too late, and they did catch up a lot already. If the drivers reach the 75% mark (as they are expected to and like r300g did), then it will indeed be a good comparison.

I also don't see why comparing compilers to drivers is not a good comparison, when a large part of what a modern GPU driver does is actually compiling. In fact, they translate OpenGL code and shaders into a form that the GPU (a processor) can execute.
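The compiler analogy is quite literal. As a toy illustration (the opcodes, register, and lowering scheme below are invented for the sketch; real drivers lower GLSL through an IR such as TGSI down to the GPU's actual ISA), a driver takes a shader-like expression, emits a flat instruction list, and the GPU then executes that list:

```python
# Toy illustration of the "driver as compiler" point: a shader-like
# expression is lowered to a flat instruction list, then "executed".
# Opcodes and the accumulator register are invented for illustration.

def compile_expr(expr):
    """Lower 'a * b + c' style expressions (evaluated left to right,
    no precedence) into (opcode, source) pairs accumulating into r0."""
    tokens = expr.split()
    code = [("MOV", tokens[0])]           # load the first operand
    ops = {"*": "MUL", "+": "ADD", "-": "SUB"}
    for op, src in zip(tokens[1::2], tokens[2::2]):
        code.append((ops[op], src))       # accumulate into r0
    return code

def execute(code, env):
    """Interpret the instruction list the way a GPU runs its ISA."""
    r0 = 0
    for op, src in code:
        val = env[src]
        if op == "MOV":
            r0 = val
        elif op == "ADD":
            r0 += val
        elif op == "MUL":
            r0 *= val
        elif op == "SUB":
            r0 -= val
    return r0

isa = compile_expr("a * b + c")
print(isa)  # [('MOV', 'a'), ('MUL', 'b'), ('ADD', 'c')]
print(execute(isa, {"a": 2, "b": 3, "c": 4}))  # 10
```

The quality of exactly this translation step (instruction selection, scheduling, register use) is where much of the blob's performance lead comes from.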

Yes it is. As Jérôme Glisse said, there are several points:
- AMD's GPU design (unlike NVIDIA's) requires the kernel to spend time checking the command buffer, for security reasons. fglrx does not do that.
- There are some limitations in the API, and they are not easy to fix. Moreover, the kernel API is frozen, unlike Nouveau's.
- r600g's design is not the best fit for what it has to do.

To conclude, the main "problem" with r600g is on the kernel side, not really on the Gallium side.
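The command-buffer checking mentioned in the first point can be sketched like this (a simplified model under invented names; the real radeon driver validates buffer relocations in its kernel CS ioctl): before submission, the kernel walks every command and rejects the whole batch if any GPU address falls outside buffers the submitting process owns.

```python
# Simplified model of kernel-side command-stream (CS) checking: every
# address a command would make the GPU touch must fall inside a buffer
# the submitting process owns, so a crafted command buffer cannot read
# or write other processes' memory. Names and ranges are illustrative.

def check_command_stream(commands, owned_ranges):
    """commands: list of (opcode, gpu_address).
    owned_ranges: list of (start, length) buffers the process may access.
    Returns True only if every address is covered."""
    for opcode, addr in commands:
        if not any(start <= addr < start + length
                   for start, length in owned_ranges):
            return False   # reject the entire submission
    return True

owned = [(0x1000, 0x100), (0x8000, 0x400)]
print(check_command_stream([("WRITE", 0x1010), ("READ", 0x8200)], owned))  # True
print(check_command_stream([("WRITE", 0x2000)], owned))                    # False
```

This per-submission walk is CPU work the open driver pays on every batch; the post's point is that fglrx skips it, trading that safety check for speed.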


Only if the binary that is produced is time-critical. Let's face it: an end user rarely cares whether encoding an mp3 takes 54 seconds or 60 seconds when the output is the same, but real-time input/output is an entirely different matter.

If 20% more performance costs $10, then it is as good as unimportant. Especially when dealing with 10-year-old games that run faster than the refresh rate anyway, which is essentially the Linux situation.


The kernel side is one part of the issue if you want to compete with Catalyst. Right now the biggest issue is in r600g itself. Though given everything r600g needs, I fear the kernel side might also hold it back a bit more than it did r300g.