
For my personal projects (I'm a game developer) I see the same results: OGL apps have similar speed on both Windows and Linux, or slightly better performance on Linux.

@tecknurd
What bullshit... I have access to many AMD cards (all models from the HD2k and HD3k series, HD4670-4850, two from the HD5k series, two from HD6k, and one from HD7k) and don't see any problems with the drivers when I test games. I also have some NV cards (one from GF8, one from GF9, and a GF200), but my main platform is based on an AMD Radeon because it lets me follow the OGL specification really closely. From time to time I test my code on other AMD cards and on NV because, as Kano said, this is a really important step (I don't have access to Intel GFX). I remember when I implemented Texture Arrays on NV: those drivers accepted even broken code (the same goes for e.g. glGenerateMipmap and many more). The tweaks in the NV drivers are really bad...

Comment

We don't need (well-intentioned) directions, we need manpower. We are 4 people working on the d3d code. All of us are paid by CodeWeavers; without that our work wouldn't be possible. We do have some other responsibilities as well (e.g. school in my case), so we're only working part time on the d3d code.

What you can do to help:

If you know your way around C and OpenGL and have a game that doesn't work, try to fix it.

This can be a tricky task, but we can help you and give you hints. Contact wine-devel@winehq.org if there are any issues.

If you're not a developer, but don't mind compiling Wine from git, run your games with the git code and bisect and report any regressions you find.

If you're doing the above on top of the open source drivers, use Mesa git as well.

We need QA help on OS X.

I have an automated performance monitoring setup, but it needs to run on many more systems. If you're willing to help here, please get in contact with us.

One annoying fact is that I spend about one fifth of my time maintaining ~12 operating system installations (Windows, Linux, OS X) on five different computers just to have different GPUs and drivers for testing.

Fglrx manages to keep up in Unigine, especially in GPU-limited setups, and Unigine doesn't need a high resolution to be GPU limited. It does lose to Windows by a small margin in Unigine when the performance is CPU limited. It's the other games where the big differences show up.

Comment

We don't need (well-intentioned) directions, we need manpower. We are 4 people working on the d3d code. All of us are paid by CodeWeavers; without that our work wouldn't be possible. We do have some other responsibilities as well (e.g. school in my case), so we're only working part time on the d3d code.

What you can do to help:

If you know your way around C and OpenGL and have a game that doesn't work, try to fix it.

This can be a tricky task, but we can help you and give you hints. Contact wine-devel@winehq.org if there are any issues.

If you're not a developer, but don't mind compiling Wine from git, run your games with the git code and bisect and report any regressions you find.

If you're doing the above on top of the open source drivers, use Mesa git as well.

We need QA help on OS X.

I have an automated performance monitoring setup, but it needs to run on many more systems. If you're willing to help here, please get in contact with us.

One annoying fact is that I spend about one fifth of my time maintaining ~12 operating system installations (Windows, Linux, OS X) on five different computers just to have different GPUs and drivers for testing.

Your situation is understandable. Have you tried asking for help from sources other than volunteers? I mean, there are many companies with an interest in Wine succeeding, like Intel, Google, Red Hat, Canonical, and others. Can you tell them that you need manpower? Have you tried asking Intel, AMD, and Nvidia to modify their drivers so you don't need the HLSL-bytecode-to-GLSL translation? I think there is potential, and I'd prefer a new Wine version over a new kernel version.

Comment

Guess what my posts here are intended to achieve :-). We do get help from outside individuals and companies. The problem with games is that there are just so many games and graphics cards that it is impossible to test, fix, and QA all of them. Interest from other companies is mostly focused on productivity applications. This is why we need lots of help from volunteers.

At CodeWeavers we have some statistics on which games our users run. World of Warcraft leads the pack, at less than 1% of total share. The whole distribution is fairly flat: every customer wants a different game.

Regarding GLSL vs. assembler: GLSL is not the problem. I prefer an excellent GLSL compiler over vendor-specific assembler extensions. But we really need an excellent compiler that goes 100% of the way, not a mediocre one that gets 80% of the use cases right. This 80% vs. 100% consideration applies to all other areas of OpenGL, and is the main difference between the Nvidia driver and all the others.

We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as texture and depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and gl. Nvidia and r600g get this right. Fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but this makes the code messy, fixes one game and breaks 5 others. Believe me, we tried.

Likewise, if a GPU has hardware support for 256 vertex shader constants, the game requires 254 for one of its shaders, and the driver consumes 4 for its private use, then this won't work: 254 + 4 > 256. This affects many drivers for dx9 cards, and is a real pain on OS X. r300g is slightly better here, but only Nvidia gives us all 256 constants it advertises.
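The constant-count arithmetic above can be made concrete with a small sketch. The numbers are the ones from the post; the variable names are invented for illustration, and a real check would read the advertised limit from the driver (a GL_MAX_VERTEX_UNIFORM_* style query) rather than hard-coding it.

```shell
# Hypothetical check of whether a d3d9 shader's constants still fit once
# the driver has reserved some for its own internal use.
advertised=256       # constants the driver claims to support
game_needs=254       # constants one of the game's shaders requires
driver_reserved=4    # constants the driver silently consumes internally
used=$((game_needs + driver_reserved))
if [ "$used" -gt "$advertised" ]; then
    echo "overcommitted: $used > $advertised"   # the failure case described above
else
    echo "fits: $used <= $advertised"
fi
```

This is why a driver that reserves even a handful of constants can break a shader that is perfectly legal against the advertised limit.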

Comment

We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as texture and depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and gl. Nvidia and r600g get this right. Fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but this makes the code messy, fixes one game and breaks 5 others. Believe me, we tried.

Do you guys at CodeWeavers have direct contact with AMD's fglrx team for bug reports?

Comment

Do you guys at CodeWeavers have direct contact with AMD's fglrx team for bug reports?

Yes. Usually we file bugs in their unofficial Bugzilla and nudge them about it. Personally, I prefer to put the effort into fixing r600g though.

The bug I was talking about here was reported as http://ati.cchtml.com/show_bug.cgi?id=426. I don't know if Matteo has made any further effort to get the bug fixed, but I assume he has. We also have access to their beta drivers.

With both Nvidia and AMD it requires a bit of luck to get bugs fixed in time. I guess it depends on their internal workload. Apple is really bad here, as I've explained at FOSDEM.

Comment

We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as texture and depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and gl. Nvidia and r600g get this right. Fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but this makes the code messy, fixes one game and breaks 5 others. Believe me, we tried.

Having just hit that exact thing myself two weeks ago: it's undefined according to the GL standard, and it causes the loss of all early-Z/Hi-Z optimizations on my card (hd4k). Changing my code not to do that gave around a 1000x speedup, all on r600g.

I agree it's not Wine's place to do hacks like that, but this one really is not valid GL. Blame Blizzard (or not, since it is legal in DX).

Comment

We don't need (well-intentioned) directions, we need manpower. We are 4 people working on the d3d code. All of us are paid by CodeWeavers; without that our work wouldn't be possible. We do have some other responsibilities as well (e.g. school in my case), so we're only working part time on the d3d code.

What you can do to help:

If you know your way around C and OpenGL and have a game that doesn't work, try to fix it.

This can be a tricky task, but we can help you and give you hints. Contact wine-devel@winehq.org if there are any issues.

If you're not a developer, but don't mind compiling Wine from git, run your games with the git code and bisect and report any regressions you find.

If you're doing the above on top of the open source drivers, use Mesa git as well.

We need QA help on OS X.

I have an automated performance monitoring setup, but it needs to run on many more systems. If you're willing to help here, please get in contact with us.

One annoying fact is that I spend about one fifth of my time maintaining ~12 operating system installations (Windows, Linux, OS X) on five different computers just to have different GPUs and drivers for testing.

I'll take you up on the first 3 :-) Or at least I'll try to. Lots of learning needed on my side.