
EXA actually works quite well when it's done properly. I'm not sure what benefits UXA really intends to offer.

It's good to keep some perspective about this: this doesn't suddenly make EXA worthless. EXA is still a good architecture and, for example, on radeon (up through r500), it provides very fast 2d. I'd like to see what benefits can be had from UXA. They'd have to be pretty darn good to justify replacing EXA in a bunch of drivers that already have it.

I keep wondering when we'll move to a unified 2d/3d stack, in other words, when the 2d API is just a subset of the 3d API.

It's probably worth reading Keith's blog before getting too upset here. UXA is a prototype Keith is putting together to try to figure out the best way to integrate EXA acceleration with a GEM-style memory manager *and* make it interoperate cleanly with a compositor. Once it is done I think you will see discussion about whether the changes can be pushed back into EXA or whether a new interface is needed.

I keep wondering when we'll move to a unified 2d/3d stack, in other words, when the 2d API is just a subset of the 3d API.

I wonder if this is already the case with nvidia... There is a known problem with 2d on the 8/9 series cards: these have far less dedicated 2d hardware, which causes slowness that is less drastic on previous generations. The problem could also be due to poor mapping of 2d calls to the GPU itself...

Unfortunately nvidia's driver is not open, so we will never know for sure. What gets to me is that all these acceleration architectures are a series of false starts which end up (sort of) deprecated before they go anywhere.

- EXA has been around for ages, and only recently did Intel get it performing well.
- Where is Glucose? Or xorg 7.4, for that matter?
- Is Gallium going to survive long enough to reach critical mass? Or is it going to be junked just as drivers start to use it?

Keith's blogs are always a bit over my head, but I gather that the purpose of UXA is to solve some of the interface issues between the 2d and 3d ends of the driver, to help GLX_EXT_texture_from_pixmap work better, and to enable GEM to handle allocation for 2d.
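For anyone unfamiliar with texture_from_pixmap: the whole point is letting a compositor sample a window's pixmap directly as a GL texture, which only works cleanly if the 2d and 3d sides agree on who owns the memory. A rough, non-runnable sketch of the call sequence from the GLX_EXT_texture_from_pixmap spec (assume dpy, an fbconfig chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT, window_pixmap, and tex were set up earlier; the EXT entry points have to be fetched via glXGetProcAddress):

```c
/* Bind a window's X pixmap as a GL texture, compositor-style. */
const int attribs[] = {
    GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
    GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
    None
};
GLXPixmap glxpixmap = glXCreatePixmap(dpy, fbconfig, window_pixmap, attribs);

glBindTexture(GL_TEXTURE_2D, tex);
glXBindTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT, NULL);
/* ... draw a textured quad showing the window's contents ... */
glXReleaseTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT);
```

If the 2d driver moved or reallocated that pixmap behind GL's back, the bind would show stale or garbage data, which is exactly the kind of 2d/3d handoff a single GEM-managed allocator is supposed to make safe.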

In that case, it's not really a new acceleration architecture at all, but just a refactoring of some of the backend assumptions in EXA? Is that about right?

People complain about ATI's drivers a lot, but they're the only ones that actually follow standards. Intel rewrites them constantly, and Nvidia ignores them completely. I wish the devs could finally make some decisions and implement stuff.

People complain about ATI's drivers a lot, but they're the only ones that actually follow standards. Intel rewrites them constantly, and Nvidia ignores them completely. I wish the devs could finally make some decisions and implement stuff.

The problem then seems to be that the standards don't always work; that's why Intel keeps rewriting them and nvidia keeps ignoring them.