
Has anyone heard of any movement towards Glamor with nouveau?
I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thnx goog).

Comment

Has anyone heard of any movement towards Glamor with nouveau?
I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thnx goog).

Maybe it's viable only after reclocking has been done?

Comment

Has anyone heard of any movement towards Glamor with nouveau?
I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thnx goog).

And why exactly are we supposed to cripple our perfectly good (OK, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate or helpful.
Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of spare time at our disposal.

And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much much simpler and thus faster to write.

Comment

Reclocking is painfully hard to do, and you would also have to do it for almost every card (or at least every GPU: there are four Kepler GPUs that I know of, for example: GK107, GK106, GK104, GK110), so we won't see it for a while, sadly. We can always hope, though.

Luckily the memory type (xDDRy) doesn't change that often, and the interfaces to it tend to only change with each new card generation (Fermi, Kepler, ...).
The trouble is, the register values that the blob writes *depend* on the specific card you have: which registers set which frequencies to which values, how to extract memory timing information from the VBIOS, where to put it, how to even determine which memory type you have, and so on. I haven't worked on it myself, but it looks like memory reclocking is the hardest part to get right. You can't just copy and paste values from the binary driver; that will, at best, work on the very card you extracted them from.

We also want the performance level to be selected dynamically based on load/temperature/power consumption; all of that is being worked on. And we can't turn it on for users before it really works, because there's always the danger of exposing your card to unhealthy levels of heat (or worse). But don't worry: I haven't heard of any dev's card getting fried yet, even when experimenting with reclocking.

Comment

And why exactly are we supposed to cripple our perfectly good (OK, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate or helpful.
Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of spare time at our disposal.

And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much much simpler and thus faster to write.

I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing operations, then writing a traditional 2D driver first makes sense.

If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
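(As a concrete example of the Glamor route: the generic modesetting DDX implements its 2D acceleration on top of OpenGL via Glamor, so one way to try it on nouveau-supported hardware is an xorg.conf device section along these lines. The identifier is arbitrary, and exact option support depends on your X server version.)

```
Section "Device"
    Identifier "Card0"
    Driver     "modesetting"
    Option     "AccelMethod" "glamor"
EndSection
```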

Comment

No 2D engine at all. We had a 2D engine in 5xx and earlier, but it didn't do blends etc., so we used the 3D engine for EXA anyway.

Neither does NV's 2D engine. It can do solid fills (with ROPs) and blits. But still, setting up the 3D engine for a single, known operation is much easier than dealing with all of OpenGL. The most significant advantage is that you don't need a shader compiler. And there are little things, too: you also don't need vertex buffers, because the 3D engine has an immediate mode (which is quite sufficient, or even preferable, for drawing a single quad).

Comment

Then why did Nvidia's (blob) 2D performance take a nosedive after the 7xxx series? I recall that at the time the official reason was that they no longer had a 2D engine and had to do 2D work on the 3D engine from the 8xxx series onwards, and that it took years to optimize it to the level of the 7xxx 2D engine.

Google finds a lot of apparent confirmations of this: that Nvidia dropped the 2D engine starting with the 8xxx series.

Comment
