Traditionally, each Mesa driver has had to include its own OpenGL implementation, which creates a lot of duplicated code. Gallium3D solves this problem by separating the API layer (known as a state tracker) from the core driver. This allows driver developers to focus mainly on hardware interaction, not on graphics APIs.

Gallium3D has already been adopted by:

VMware (for Linux guests)

Nouveau project (open source NVIDIA drivers)

ATI (R300 driver nearly complete, R600 in the works)

Change History

We discussed this internally, and we weren't convinced that it would gain us a lot. It would mean throwing away our existing DRI driver code (admittedly the existing code is not the nicest, but it works) and starting from scratch without any clear reason to think that the result would be better. In particular, since we pass on graphics pipeline commands and data to the host's OpenGL implementation it seemed that Gallium3D might actually be a worse fit.

If you know more about this then please feel free to add to this ticket of course.

You guys weren't convinced that you would gain a lot? What you gain is whatever any state tracker out there supports. If the driver doesn't support something a state tracker needs, LLVM can generate a suitable software fallback, and this has proven very efficient so far. Current state trackers for Gallium3D include:

The thing is, we already have our own state tracker for OpenGL 2.1, and going the Gallium way looks pointless until there are host drivers based on Gallium for most GPUs on most OSes. Otherwise we would end up writing code that converts the Gallium driver command stream back to the OpenGL API level, which is not an easy task.

And what of OpenVG and the upcoming OpenCL and VDPAU state trackers? Right now you guys have OpenGL 2.1 and that's a good start, but you can expose your users to so much more by "biting the bullet" and implementing this driver. All you have to worry about is a "hardware" driver, not implementing any APIs on top of it.

"All you have to worry about is a "hardware" driver"
This is exactly the issue: it's "pointless until there are host drivers based on Gallium for most GPUs on most OSes."

So, for example, how many of the new ATI/NVIDIA/Intel chips have Gallium-based drivers for Windows 7 or Mac OS? Zero.
I can imagine that *maybe* those companies will write Gallium drivers for their modern GPUs one day, but I highly doubt anyone will ever care about old cards. That would mean no support for old hardware, and no support at all for OSes without the Gallium framework.

So, as I wrote earlier, the benefit-to-effort ratio is too low at the moment to switch our 3D acceleration support to a Gallium-based model.

If you desperately want a Gallium-based approach, you could contribute to our project by providing the necessary code.

I don't think that their situation is really comparable, as they did not start out with working 3D support, but bought up a company specialised in 3D drivers to do it for them - who naturally enough made use of what they already had/were working on. We do have a working implementation, and throwing it away and starting again from scratch is hardly the best way to reduce code duplication, unless you don't count rewriting thrown-away code as duplication. And you must understand that 3D support is just one of the things we do, not our main development focus :)

Out of interest, do you have an immediate application that would need Gallium3D support in a virtual machine that isn't served by the current solution?

I do not currently require Gallium3D for anything; it would merely be a convenience.

Out of interest, how much work are you guys going to have to do to implement OpenGL 3 when the time comes? I have to admit that I'm extremely confused as to why you would ever want to maintain a driver and multiple APIs for it instead of just...a driver.

I encourage you to briefly scan this presentation. It's old, and Gallium3D has matured considerably since then:

I doubt anyone "requires" Gallium3D for anything right at the moment; the truth of the matter is that the drivers for anything other than the softpipe backend are still very much in an infant state, and will be for another 6-12 months.

It should be noted, though, that many more projects than the three mentioned in this ticket are going to be providing Gallium3D support. Intel's driver is going to be done that way, and within the timeframe I mentioned above it is going to be how drivers and 3D get done on Linux. It's in keeping with how the "Big Two" do things within their own drivers, and I'd think there's a reason why they structured their drivers the way they did: you don't need to drag the API edges, otherwise known as state trackers, around with your drivers.

Within Gallium3D, it'd be one driver that hooks into the GL and D3D state trackers on Windows. The same driver would serve OpenGL and OpenGL ES 1.1/2.0 on Linux, and so on. With Gallium3D, you'd write the code for hooking into either a Gallium3D or host-OS 3D subsystem once. Then you'd be largely done: you wouldn't need to maintain the state tracker piece for Linux or Windows, or any other OS where the API in question already has a state tracker. You'd just recompile the tracker for that OS if it wasn't already provided. The only ongoing work would be providing the needed Gallium3D pieces, once.

What you've got now works, but one wonders if it's maintainable over the long haul...

As an additional observation: as a developer, even when you have a working implementation, sometimes you need to refactor or discard it when something better, or at least more maintainable, comes along.

As for handing you guys the code... well, if I had the time, perhaps. I would do so, were it not for the fact that I'm quite busy myself porting games to Linux and trying to lever up another two business ventures on top of that.

I think the bug poster really just wants it on your roadmap. You've got what's arguably one of the best, if not the best, virtualization products out there, hands down. It'd be a shame if your 3D support stagnated because you didn't see the value in a framework that's very much how the 3D vendors actually DO things within their drivers, just because you saw redoing things as "throwing away something that works".

Until Gallium3D has matured there's not that much to consider. We currently have a working solution and are willing to consider other options if there are clear benefits. We'll wait and see what happens with Gallium3D and revisit the subject in the future. It's simply too early right now.

What I also miss is where you got the information that Intel is going to implement a Gallium3D-based driver, let alone a timeframe of 6-12 months.
From what I can read on the mailing lists it's quite the opposite; see for example http://lists.freedesktop.org/archives/mesa-dev/2010-April/000039.html
The reasons stated there are quite similar to ours.

There are some community efforts working on Gallium3D-based drivers, but according to the benchmarks those are much slower than the closed-source hardware vendors' drivers.

To my knowledge, the only really finished hardware Gallium3D driver is for the R300, and it's already (generally) faster than its legacy-Mesa cousin. At least that's what the benchmarks at Phoronix show:

Also keep in mind that we need a solution for Linux, Windows, Solaris and Mac OS X guests and hosts. Including legacy guests.

The original intent of the ticket was for this driver to replace the legacy-Mesa driver. This, to my knowledge, will not work on a Mac OS X guest, which is currently a bear to get working anyway.

Gallium3D has already been proven to work on Windows using an in-house Direct3D 9 state tracker. VMware is developing Direct3D 10 and 11 state trackers for Gallium3D on Windows, but we aren't going to have access to those as they won't be open source. Oracle would have to develop their own. It would still be a far cleaner solution than the current WINE hack.

OpenSolaris isn't going to see Gallium3D until that distro is made libdrm-compatible, and to my knowledge that hasn't happened. Until they get this done, they aren't going to have open-source NVIDIA support either, as that too is Gallium3D-based. R800 support is also very likely not to exist for OpenSolaris, seeing as AMD has a strong eye on this new graphics framework for that driver as well.

I think I forgot to mention one very critical thing: VMware translates Gallium3D's internal language to OpenGL (on Linux hosts). You guys already have a Chromium pipeline in place, right?

I thought you meant that the driver code you posted a link to above does that conversion (or what was your intention in mentioning it?), but that code seems to me to convert to a proprietary pipeline format based on DirectX, in particular using DirectX shader bytecode. I assume that the OpenGL conversion you mentioned is done by closed-source code inside VMware itself.
Just to reiterate what Sander said, we will certainly look at doing a Gallium driver whenever that makes sense to us. We won't do it just to "keep up with the Joneses" though.

I do wonder slightly whether you are underestimating the amount of work (read: developer time) involved in what you are requesting, including the work of maintaining our current drivers in parallel. Although we are grateful, of course, for your concern for reducing our development effort!

You certainly seem serious about this. Feel free to keep us updated about the results, or to ask us for any VirtualBox-related background information that you need (although probably rather on the vbox-dev mailing list than on this bug ticket).

There is an i915 driver that has at least been partially implemented, and work is underway right now (by Intel employees, mind...) on an i965 driver; the link I handed to you is a commit record in the Git repo for Mesa. Intel's doing Gallium3D. :-D

As for our erstwhile bug reporter... I'll see what insights I can lend to his efforts.

I'm afraid that the situation is still pretty much what it was. We have a working solution which supports everything that we have a business case to support; we have a good idea of how much maintenance effort our current solution is; and we don't have the spare resources available to produce a new 3D driver without a clear business case while still maintaining the old one until the new one was in a usable state (which we would obviously have to do, and for a rather unpredictable length of time).

I seem to recall that you were planning to have a go at a G3D driver yourself - I am interested to know how far you got, and what you learned/what the problems were which stopped you getting further.

For the record, I have also investigated Gallium3D a bit, and it seems to me to be less than ideal for our purposes. For a start, it is not an API, but a framework for writing drivers, and one which evolves over time. So we would need either to have our driver upstream (which we have not yet committed to, not least because it would need a decision about keeping our ABI stable) or to fork Gallium3D, which means forking Mesa.

Which leads to the next problem: the main purpose of Gallium3D (yes, I know there are lots of secondary ones) is to produce DRI drivers. However, DRI does not have a stable API either, but is tied into Mesa (and is therefore dependent on the Mesa version) in a way which I don't feel very happy with. So forking Mesa would lead to problems as soon as we wanted to be compatible with the DRI subsystem installed on a random guest system. Our current DRI driver uses some not very nice hacks to get around that, and I am more inclined to spend time making those hacks nicer than trying to get two different and random versions of Mesa to play together.

I would be interested to hear your thoughts about this, but I don't think that we are likely to rethink our position just now.

My attempts at making a Gallium3D driver stopped when my campus faculty told me I was biting off more than I could chew, and they were probably right. I wanted to create it as a senior project.

As far as Aero goes, VMware seems to be using Gallium3D for their Windows drivers, and they evidently have good WDDM support. How it ties into their proprietary Direct3D 9 state tracker, I am unsure.

I would be interested in seeing how your current solution compares in performance to LLVMpipe. I've been comparing some of Michael Larabel's benchmarks on Phoronix, and it looks like Gallium3D's software rasterizer might be faster than what you guys currently have. I'll look more into this when I don't have classes.

Oh and here's something I've failed to bring up: What about KMS? Major distros are switching over to Wayland.

I think I would like a bit more than "VMware seems to be using Gallium3D for their Windows drivers" to convince me that Gallium3D will be helpful for Windows drivers (we support Aero now, and currently our main 3D development effort is getting rid of Windows glitches). I have trouble believing that it won't be a lot of work just to get a Windows Gallium3D driver to the point our current drivers are at now.

I don't think that switching to Gallium3D would magically improve our performance, but rather that profiling of our stack as it is to look for the bottlenecks would be a better time investment. On my todo list but not at the top.

KMS is independent of Gallium3D, and also on my todo list but not at the top. The first commits have already been done, but it is not yet in anything like a working state.

I had already read the second article, and was aware of what they described in the first (I have superficially studied how VMWare's virtual card works). Obviously in our case things are not very different except that we hook into the 3D stack at a higher level, and that we don't bother translating things to look like we are programming real hardware, but instead pass the OpenGL through more directly. I haven't yet looked at what OpenGL 3.0 support would need (answering an earlier question), but it looks to me like it is mainly a few more commands to be passed through, and checking whether any assumptions we make currently become invalid due to semantics changes.

And your statement that using Gallium3D should let us "focus more time and effort on a smaller piece of functionality" still seems highly speculative to me. I already mentioned a few of the reasons why I think it would bring additional work which we don't have now, so I won't bother adding more (there are more). In any case the upstream/downstream issue is most likely a sufficient blocker, even disregarding the fact that upstream may not be as well disposed towards us as they are to other driver maintainers. Again, out of interest, do you know of anything I don't which indicates that maintaining a Gallium3D driver out of tree is likely to be doable without causing more trouble than it is worth?

Actually I would now like to put a question back to you. Can you tell me whether my understanding is correct that a DRI2 driver based on Gallium3D will only be compatible with the version of Mesa (possibly even the build of Mesa) that it was built against? I realise that DRI2 was originally supposed to be an independent ABI, but as far as I can see they never managed to break the Mesa dependency.

Note: actually the DRI drivers do only depend on a small, well-defined Mesa API which can be replicated by other libraries (and is faked by the GLX loader). There are two different versions of the interface (TLS and non-TLS; technically there is a third, non-threaded, but the non-TLS version also supports that), but that is more manageable than random dependencies.

Okay, I got VMware Workstation for Linux and it's running fine except for one small problem: I don't have any 3D acceleration in my Windows 7 guest. This seems to be because I'm missing some OpenGL extensions: S3TC texture compression (patented) and/or framebuffer object support.

If these are required for efficient TGSI->OpenGL translation, then perhaps my original request is in error. I'm running unstable Gentoo on this thing, and they're usually pretty good about keeping X.Org and Mesa up-to-date.