
Mac OS X 10.6.3 Packs Some Regressions

03-30-2010, 09:30 AM

Phoronix: Mac OS X 10.6.3 Packs Some Regressions

Yesterday we published Mac OS X 10.6.2 vs. Ubuntu 10.04 benchmarks. Overall, the latest Mac OS X and Ubuntu Linux releases were competitive with one another, though each OS had a few strong points. As luck would have it, Apple then introduced the Mac OS X 10.6.3 update, which brought a variety of maintenance fixes to Snow Leopard. There were rumors of performance-related changes, and the release notes mention addressing "compatibility issues with OpenGL-based applications," which was taken to mean graphics driver updates, so we ran our automated tests atop Mac OS X 10.6.3 to look for any differences.

I've got another question: I was told Mac OS X only supports OpenGL 2.1, with some pieces of OpenGL 3.0. How is that possible when the OpenGL code for Linux, Windows, and Mac is shared within the proprietary graphics drivers?

Comment

Because Apple supplies a single, common OpenGL state tracker, written by Apple itself. IHVs provide only the lower, hardware-specific parts of the stack, which can indeed be shared across operating systems.
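The split described above can be sketched roughly. This is a hypothetical illustration, not Apple's real API: the class names, the version numbers, and the idea that the reported GL version is the minimum of the tracker's and the hardware's are all assumptions made for the example.

```python
# Hypothetical sketch of an OS-owned state tracker over an IHV backend.
# Names and numbers are illustrative only, not Apple's actual interfaces.

class HardwareBackend:
    """Interface an IHV implements; this part is shareable across OSes."""
    def max_gl_version(self):
        raise NotImplementedError
    def submit(self, command):
        raise NotImplementedError

class VendorBackend(HardwareBackend):
    """A made-up vendor driver whose silicon could do GL 3.0."""
    def max_gl_version(self):
        return (3, 0)
    def submit(self, command):
        return f"hw:{command}"

class StateTracker:
    """OS-owned layer: apps only ever see the GL version it implements,
    regardless of what the hardware underneath could support."""
    SUPPORTED = (2, 1)  # e.g. an OS shipping a GL 2.1 state tracker

    def __init__(self, backend):
        self.backend = backend

    def reported_version(self):
        # The app sees the lesser of the tracker's and the hardware's
        # capability, so a GL 3.0-capable GPU still reports 2.1 here.
        return min(self.SUPPORTED, self.backend.max_gl_version())

    def draw(self, command):
        return self.backend.submit(command)

tracker = StateTracker(VendorBackend())
print(tracker.reported_version())  # (2, 1)
print(tracker.draw("clear"))       # hw:clear
```

This is why the same vendor hardware can expose different OpenGL versions on different operating systems: the ceiling comes from the OS-supplied layer, not the shared driver code below it.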

Comment

Yeah, the way Apple does it is very interesting. Part of the stack is shipped as bytecode and compiled with LLVM. This means that on a good graphics card all the graphics code runs on the GPU, while on hardware that lacks certain features (like the crappy Intel graphics chips) it can run on the CPU instead, which is far faster than conventional software rendering.

As for the regressions, I'm sure they'll be fixed soon - especially with Steam coming to Mac OS this month.
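The fallback idea in that comment can be sketched as capability-based dispatch. Everything here is an invented stand-in for illustration: the feature set, the function names, and the two execution paths are assumptions, not Apple's actual code.

```python
# Illustrative capability-based fallback (invented names, not Apple's code):
# a pipeline stage runs on the GPU when the hardware supports it, and
# otherwise falls back to a CPU path. The forum comment's point is that
# Apple can ship that CPU path as LLVM bytecode and JIT-compile it,
# rather than interpreting it as a slow software renderer would.

GPU_FEATURES = {"vertex_shader"}  # hypothetical chip missing fragment shaders

def gpu_run(data):
    # Stand-in for hardware execution of a shader stage.
    return [x * 2 for x in data]

def cpu_jit_run(data):
    # Stand-in for the LLVM-compiled CPU fallback; same results, slower.
    return [x * 2 for x in data]

def execute(stage, data):
    if stage in GPU_FEATURES:
        return gpu_run(data)
    return cpu_jit_run(data)

print(execute("vertex_shader", [1, 2]))    # GPU path
print(execute("fragment_shader", [1, 2]))  # CPU fallback, same result
```

The application gets identical results either way; only the execution path (and the speed) differs, which is the whole point of the transparent fallback.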

Comment

Comment

Update: it seems that the Phoronix test doesn't tell the whole story. Ben commented on the X-Plane blog:

Phoronix reported a performance penalty with the new update; I do not know the cause of this, or whether the fps_test=3 bug could be causing it. But their test setup is very different from mine - a GeForce 9400 on a big screen, which really tests shading power. My setup (an 8800 on a small screen) tests vertex throughput, since that has been my main concern with NV drivers.

My suggestion is to use --fps_test=2 if you want to differentiate 10.6.2 vs. 10.6.3. I'll try to run some additional benchmarks soon!

EDIT: Follow-up. I set the X-Plane 945 time demo to 2560 x 1024, 16x FSAA, and all shaders on (e.g. let's use some fill rate). I put the Cirrus jet on runway 8 at LOWI, then set paused forward full screen no HUD. In this configuration, I see these results:
Objects   10.5.8    10.6.3
none      85 fps    100 fps
a lot     46 fps    61 fps
tons      37 fps    42 fps
Note that in the "no objects" case the sim is fill-rate bound - in the other two it is vertex bound. So it looks to me like 10.6.3 is faster than 10.5.8 for both CPU use/object throughput and perhaps fill rate (or at least, fill-rate heavy cases don't appear to be worse).
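The fill-bound vs. vertex-bound distinction above can be captured with a toy model: per-frame time is dominated by whichever stage is slower. The rates and counts below are invented for illustration and are not measurements from X-Plane or these GPUs.

```python
# Toy bottleneck model (all numbers invented, for illustration only):
# frame time is roughly the max of the fill stage and the vertex stage.

def frame_ms(pixels, vertices, fill_rate, vertex_rate):
    """fill_rate in pixels/ms, vertex_rate in vertices/ms."""
    fill_ms = pixels / fill_rate
    vertex_ms = vertices / vertex_rate
    bound = "fill" if fill_ms >= vertex_ms else "vertex"
    return max(fill_ms, vertex_ms), bound

# 2560x1024 with 16x FSAA and no objects: many pixels, few vertices.
print(frame_ms(pixels=2560 * 1024 * 16, vertices=50_000,
               fill_rate=5_000_000, vertex_rate=20_000))   # fill bound

# "tons" of objects at the same resolution: vertex work dominates.
print(frame_ms(pixels=2560 * 1024, vertices=5_000_000,
               fill_rate=5_000_000, vertex_rate=20_000))   # vertex bound
```

This is why the two setups disagree: a driver change that speeds up shading moves the "no objects" row, while one that speeds up vertex submission moves the "a lot"/"tons" rows, and each tester's hardware and settings decide which stage they are actually measuring.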