
Just because they deliver 15% slower performance than the blob when underclocked does not (and will not) mean that these numbers will scale accordingly when reclocking support is finally (if ever) implemented.

The benchmark has one card that does clock properly. And, unfortunately, it does suggest that it doesn't scale well.

Perhaps I missed something but why doesn't Michael run the Nouveau drivers at their highest speeds when possible?

As stated in the article's introduction, Michael does run the Nouveau drivers at the highest speeds when possible, i.e. on the 9800 GTX. It doesn't work on the other two cards.

Originally Posted by liam

I've a Quadro FX 570M that has three clock levels. The highest level is core 475 shader 950 memory 700, while stock level is 275, 550, 300, respectively. A rather massive difference.
If used this for gaming I'd always put it at the highest speed levels. I'd imagine most gamers would do the same.

I'd imagine most gamers would not run their cards at the highest speed levels if it crashes immediately, or is unstable, or produces rendering errors.

I note that your Quadro FX 570M has an older NV84 (G84) chipset, while the 9800 GT and 9800 GTX have the NV92 (G92) chipset. The GT 220 has the NVA5 (GT216) chipset. If you have been successful in reclocking the Quadro FX 570M please let us know which kernel version you are using as this would be good news and might also be useful to someone else.

If Michael had tested with a GT 240 (I can't remember if he owns one), which has the NVA3 (GT215) chipset, things might have been different, as better NVA3 memory re-clocking was introduced in the 3.5 kernel. I'm not sure how much difference that would make without also reclocking the core though.
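As an aside for anyone who wants to check reclocking on their own card: recent kernels expose nouveau's performance levels through a debugfs file. This is only a rough sketch; the path and the level names are assumptions based on the usual debugfs layout, it requires root, a nouveau-driven card at DRI instance 0, and the exact interface varies by kernel version:

```shell
# Sketch: inspect nouveau clock levels via debugfs (path assumed; needs root
# and a nouveau-driven card; interface details vary by kernel version).
PSTATE=/sys/kernel/debug/dri/0/pstate
if [ -r "$PSTATE" ]; then
    # Lists the available levels, e.g. "0f: core 475 MHz shader 950 MHz memory 700 MHz"
    cat "$PSTATE"
    # echo 0f > "$PSTATE"   # would request the highest level; left commented out,
                            # since reclocking can hang cards that don't support it
else
    echo "nouveau pstate interface not available on this system"
fi
```

If the file isn't there, either debugfs isn't mounted or the kernel is too old to expose pstates for that chipset.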

Nouveau will have optimus support in distros *first* and by *default* real soon, and Nvidia won't.

Ha ha, take that Nvidia.

I wouldn't bet my ass on that. Nvidia has been working hard for some months already to get Optimus working in their blob.
Honestly speaking, imo, Optimus is useless with nouveau, because the performance gain from using the nouveau driver + Nvidia gfx instead of the Intel gfx is likely too low to make gaming with it worthwhile.


Good point that.

If memory serves, the whole point of Optimus was to get the figurative best of both worlds by wiring the display to the Intel GPU, which is just good enough for everything that is not games or CAD related, and swapping over to the dedicated chip when more crunching power is needed.

With the Nouveau driver not coming close to delivering the expected performance from the Nvidia chip, it makes little sense to even bother with switching if the performance gains do not match the increase in power consumption.
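For what it's worth, the switching part already works on a Mesa stack via PRIME render offload, so the open question is really the performance, not the mechanism. A rough sketch of how you'd test it; DRI_PRIME is the standard Mesa variable, and glxinfo/xrandr are assumed to be installed:

```shell
# Sketch: PRIME render offloading with intel + nouveau (assumes a Mesa stack
# with both GPUs visible; prints a hint if the tools are missing).
if command -v glxinfo >/dev/null 2>&1; then
    xrandr --listproviders                         # should show Intel and nouveau providers
    glxinfo | grep "OpenGL renderer"               # default renderer: the integrated Intel GPU
    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # offloaded: the NVIDIA GPU via nouveau
else
    echo "glxinfo not installed; install mesa-utils to test offloading"
fi
```

Running a game the same way (`DRI_PRIME=1 some-game`) renders on the NVIDIA chip, which is exactly where nouveau's missing reclocking hurts.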

The steam engine was a revolution in engineering, but I don't think many use them anymore..

As far as I know, almost all power plants use steam engines... from coal to nuclear. There are of course some exceptions, for instance the natural-gas-driven Otto engines which drive the dynamos directly.


Nouveau developers are retarded, trying to reinvent the wheel, wasting valuable resources and time instead of working on other parts of Linux that need improvement.

Time to take this garbage out and throw it into the dustbin of history where it belongs.

Well, if they are retarded, you are braindead.

You HAVE to have control over your hardware. It wasn't just once that people have found hair-raising covert security holes in binary blobs, and nVidia has been mentioned more than once in that context.

While I understand some of the woes aimed at the open-source effort, I have to acknowledge the difficulty of working with such complex stuff under a chronic and critical lack of documentation.

Just look at AMD's documentation (since they publicly claim to support open source and open documentation) and try to make heads or tails of it.

nVidia and AMD get plenty of cash from hardware. Compared to that, the open-source developers' effort lives almost on public mercy.

And still (while I don't know about Nouveau), the open-source radeon driver is not half bad. Eyefinity works superbly for me (3 x 1600x1200). 3D is not that bad. 2D is great. It works without a problem on all kernels I cared to try, even the freshest ones, compiled straight from git or a few hours after release. The only thing still missing for me on radeon is a good GPGPU interface.

Not so with the closed-source fglrx. It's full of bugs, some of which persist through many versions over a multi-year timespan and are quite annoying. It is not always easy (or even possible) to nail the right combination of kernel, driver, libraries, and xorg server to make it work.

I switched recently from radeon to fglrx only to choke on bugs and instabilities. For the moment I am persisting because I want to play with GPGPU stuff, but the very first day radeon includes support for it, I am switching back.