
I guess I don't understand the pushback -- people are effectively saying "if you need something like a full 3D driver in the ddx to get decent performance then we expect you to do it" and "OMG why are they looking at Glamor (a ddx built over a full 3D driver)??"

I guess it's that Glamor both tears and is slower in every benchmark published, on all hardware measured.

Plus there's a belief in the air that "normal" 2D would already be running on HD7K.


> I guess it's that Glamor both tears and is slower in every benchmark published, on all hardware measured.

Slower than what? For "2D", software rendering is almost always faster than hw acceleration (EXA, UXA, and even SNA in a lot of cases). RENDER semantics map poorly to hw. Glamor is roughly on par with EXA today and makes it a lot easier to improve support in the future.


> Slower than what? For "2D", software rendering is almost always faster than hw acceleration (EXA, UXA, and even SNA in a lot of cases). RENDER semantics map poorly to hw. Glamor is roughly on par with EXA today and makes it a lot easier to improve support in the future.

You guys are talking about totally different things. agd5f is talking about software rendering (e.g. shadowfb) on SI, which is what you get by default today; you're talking about Glamor on Intel hardware.

Going forward, remember Chris W's comment that (paraphrasing a bit) Glamor is a good fit for big-ass GPU hardware (e.g. SI). Also note that some people are looking at initial patches for Glamor vs years of work on other architectures and extrapolating that "it's gonna be that way forever".


Software rendering is on par with or faster in more cases than any of the acceleration architectures. By that logic we should all be using shadowfb.

There are some use cases where software rendering is currently slower (primarily the fancy HTML5 browser demos). But it would be really nice if the radeon DDX allowed using non-crippled software rendering for 2D graphics (so that it is at least as fast as fbdev) as an option, while still providing the rest of the features expected from a modern graphics card (hardware-accelerated OpenGL for 3D games, the Xv extension for tear-free video, rotation, multi-monitor support, ...). Right now Option "RenderAccel" has some performance issues.


> There are some use cases where software rendering is currently slower (primarily the fancy HTML5 browser demos). But it would be really nice if the radeon DDX allowed using non-crippled software rendering for 2D graphics (so that it is at least as fast as fbdev) as an option, while still providing the rest of the features expected from a modern graphics card (hardware-accelerated OpenGL for 3D games, the Xv extension for tear-free video, rotation, multi-monitor support, ...). Right now Option "RenderAccel" has some performance issues.

As I've mentioned several times on all of these threads, for good performance you really need to be all-software or all-hardware rendering. Any time you mix the two, performance suffers. That's why disabling RENDER accel is slower than plain shadowfb: you can use the GPU for copies and fills, but for everything else, the buffer must be migrated between CPU and GPU domains whenever you want to switch who renders. The same is true of trying to mix shadowfb and hw 3D rendering or Xv. If you have a shadowfb in system memory and the CPU is doing the rendering, and then the GPU renders to an OpenGL or Xv buffer, you have to deal with updating your shadowfb in system memory with the results of the 3D rendering in GPU memory. You still end up ping-ponging.

EXA does not even provide the infrastructure to accelerate gradients or trapezoids at the moment. They weren't used much previously because very few drivers (if any) implemented acceleration for them; hence they weren't tested much outside of sw rendering. Software rendering works because it's the reference implementation; that's how the features like gradients were added in the first place. Glamor has the infrastructure to support gradients and trapezoids already which is another reason it is attractive.


> If you have a shadowfb in system memory and the CPU is doing the rendering, and then the GPU renders to an OpenGL or Xv buffer, you have to deal with updating your shadowfb in system memory with the results of the 3D rendering in GPU memory. You still end up ping-ponging.

Hmm, seems like this is really difficult for me to understand. Why would we even need to "update shadowfb in system memory with the results of the 3D rendering in GPU memory"? Doesn't it make a lot more sense to just set up something like periodic DMA transfers from shadowfb system memory to GPU memory (synchronized with screen refresh) and composite it with 3D or Xv buffers together on the GPU side before sending the combined pixel data to HDMI? Or is there some kind of hardware limitation here? Maybe I just need to check the hardware manuals first and not waste your time with silly questions.

> Glamor has the infrastructure to support gradients and trapezoids already which is another reason it is attractive.

OK, in any case it's nice to have multiple alternatives, so you are of course free to try your luck with Glamor -- with only one important condition: we need much better tools for automated correctness validation of RENDER implementations. Otherwise it is a real recipe for disaster.