
NVIDIA Performance: Windows vs. Linux vs. Solaris

Phoronix: NVIDIA Performance: Windows vs. Linux vs. Solaris

Earlier this week we previewed the Quadro FX1700, which is one of NVIDIA's mid-range workstation graphics cards that is based upon the G84GL core that in turn is derived from the consumer-class GeForce 8600 series. This PCI Express graphics card offers 512MB of video memory with two dual-link DVI connections and support for OpenGL 2.1 while maintaining a maximum power consumption of just 42 Watts. As we mentioned in the preview article, we would be looking at this graphics card's performance not only under Linux but also testing this workstation solution in both Microsoft Windows and Sun's Solaris. In this article today, we are doing just that as we test the NVIDIA Quadro FX1700 512MB with each of these operating systems and their respective binary display drivers.

Excellent review guys. SPECViewPerf is a standard benchmark for me and the industry I work in (I am the sysadmin at a small studio called Kanuka).

Currently Kubuntu Linux on NVidia hardware is the standard workstation rollout for Kanuka, with a few MacOSX machines and one WindowsXP Pro box for 3DSMax work. Vista has been completely shunned from the studio, and for good reason as these benchmarks show.

This review mirrors a lot of our own in-house testing, and reaffirms for us that we made the right decision not only for flexibility reasons, but also for performance reasons.

You were right about the aggressive part - it was hammering my laptop for half an hour, and it was making quite a few weird squeaky noises.

I did notice that it was using both of my CPUs, though, each at about half load - is my setup messed up, or was it really supposed to use the CPU?

OpenGL is single threaded (indeed, multi-threading any realtime API is damn near impossible).

The benchmark is designed to stress your GPU (video card), but with that said it will need the CPU to feed it the data it needs. Modern video cards are still fairly primitive in what they can process. While they have been offloading "Transformation and Lighting" effects, and can now do pixel shaders and whatnot, most geometry calculation and texture preparation is still done by the host system in software, and then fed to the GPU at a later time to do the final rasterization to the output framebuffer, and then finally your screen.
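As a rough illustration of the division of labour described above, here is a minimal Python sketch (the names `prepare_frame` and `submit_to_gpu` are hypothetical stand-ins, not real driver calls): the host transforms every vertex in software, then hands the batch off for final rasterization. With SPECViewPerf-scale models, this per-vertex CPU loop is why a "GPU benchmark" still keeps a processor busy.

```python
# Hypothetical sketch of CPU-side geometry preparation; a real
# engine/driver does this in optimized native code, not Python.

def transform_vertex(matrix, v):
    """Apply a 3x3 model-view matrix to one vertex on the CPU."""
    return tuple(sum(matrix[r][c] * v[c] for c in range(3)) for r in range(3))

def submit_to_gpu(batch):
    # Stand-in for the real driver upload; just reports the batch size.
    return len(batch)

def prepare_frame(matrix, vertices):
    """Transform every vertex in software, then batch them for the GPU."""
    transformed = [transform_vertex(matrix, v) for v in vertices]
    return submit_to_gpu(transformed)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(prepare_frame(identity, tri))  # 3 vertices submitted
```

Scale the triangle list up to the millions of vertices in a SPECViewPerf model and the cost of that software loop dominates the CPU side of the benchmark.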

What you saw was the single thread jumping back and forth between your two processors, which is not all that uncommon on any multiprocessor system. And yes, it's perfectly normal behaviour for SPECViewPerf. Most of the models in SPECViewPerf have much higher poly counts (some by several orders of magnitude) than anything you'll find in a modern video game. As such, the CPU must work a lot harder to feed the GPU the information it needs to draw the scene.
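If the bouncing between processors bothers you, you can pin the benchmark to one core. A hedged, Linux-only sketch using Python's `os.sched_setaffinity` (the same effect as launching the benchmark under `taskset`); `pin_to_cpu` is a name I made up for illustration:

```python
import os

def pin_to_cpu(cpu):
    """Pin the calling process to a single CPU (Linux-only).

    Pinning stops the scheduler from bouncing a single-threaded
    benchmark between cores, which is what shows up as 'two CPUs
    each at half load' in a process monitor.
    """
    previous = os.sched_getaffinity(0)  # remember the old mask
    os.sched_setaffinity(0, {cpu})      # restrict to the one core
    return previous

old_mask = pin_to_cpu(0)
print("now pinned to:", os.sched_getaffinity(0))
os.sched_setaffinity(0, old_mask)       # restore the original mask
```

Whether pinning actually helps SPECViewPerf numbers is worth measuring; on most systems the migration itself costs very little.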

I still have yet to graph this data, but I did learn two things: a) starting games on a separate X server (log out, switch session to a failsafe terminal, start the game) can be put in the "compiling your own kernel" category - useless on newer hardware, since the difference is minimal and not even clear-cut; and b) the same goes for Compiz - there seems to be no OpenGL/Compiz conflict on the NVIDIA card, games run just fine, and there wasn't much difference with Compiz on or off.