Today we are going to talk about eight graphics accelerators with proprietary designs that we managed to get in for review. All of them are based on the recently launched Nvidia GeForce GTX 660 Ti GPU.

Power Consumption

We measured the power consumption of our testbed equipped with different graphics cards using a multifunctional Zalman ZM-MFC3 panel, which reports how much power a computer (without the monitor) draws from a wall outlet. There were two test modes: 2D (editing documents in Microsoft Word or web surfing) and 3D (three runs of the Metro 2033: The Last Refuge benchmark at 2560x1440 with maximum image quality settings but without antialiasing).

Here are the results:

The systems with the different GeForce GTX 660 Ti cards hardly differ in power consumption. The Zotac is the only exception, obviously due to its pre-overclocked memory. Otherwise, the difference is within 10 to 12 watts, which is negligible against a total power draw of about 400 watts. Thus, the manufacturers' claims that their products are especially economical look like nothing but marketing. The overclocked GeForce GTX 660 Ti from Gigabyte needs 14 watts more than at its default frequencies, almost reaching the level of the GeForce GTX 670. The system with the Sapphire Radeon HD 7950 has the highest power draw here, yet even it stays below 450 watts.

Testbed Configuration and Testing Methodology

All participating graphics cards were tested with the following testbed configuration:

As we can see, the clock frequencies of these graphics cards have been increased above the nominal level, but we chose not to lower them to the nominal values, because almost all of today's testing participants come with increased frequencies, too.

In order to reduce the dependence of graphics card performance on the overall platform speed, we overclocked our 32 nm six-core CPU to 4.625 GHz by setting the multiplier to 37x and the BCLK frequency to 125 MHz, with “Load-Line Calibration” enabled. The processor Vcore was increased to 1.49 V in the mainboard’s BIOS:
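The resulting CPU frequency follows directly from the multiplier and base clock; a quick sanity check of the figures quoted above:

```python
# Effective CPU frequency = multiplier x base clock (BCLK).
# The values below are the testbed settings described above.
multiplier = 37
bclk_mhz = 125.0

cpu_freq_ghz = multiplier * bclk_mhz / 1000
print(cpu_freq_ghz)  # 4.625
```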

Hyper-Threading technology was enabled. 16 GB of DDR3 system memory worked at 2 GHz with 9-11-10-28 timings at 1.65 V.

The test session started on September 14, 2012. All tests were performed in Microsoft Windows 7 Ultimate x64 SP1 with all critical updates as of that date and the following drivers:

Since there are ten participants in today’s test session, we will only check their performance at one resolution: 1920x1080 pixels. Nevertheless, the tests were performed in two image quality modes: “Quality+AF16x” – the drivers’ default texturing quality with 16x anisotropic filtering enabled – and “Quality+AF16x+MSAA 4(8)x” – 16x anisotropic filtering plus full-screen 4x or 8x antialiasing whenever the average frame rate was high enough for a comfortable gaming experience. We enabled anisotropic filtering and full-screen antialiasing from the game settings. If the corresponding options were missing, we changed these settings in the Control Panels of the Catalyst and GeForce drivers. We also disabled Vsync there. There were no other changes in the driver settings.
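The mode-selection rule above can be sketched as a one-line decision; note that the 60 fps cutoff is our assumption for illustration – the article only says "high enough for a comfortable gaming experience":

```python
# Sketch of the image-quality mode selection described above.
# The 60 fps cutoff is a hypothetical threshold, not a figure from the article.
def msaa_mode(avg_fps: float, cutoff: float = 60.0) -> str:
    if avg_fps < cutoff:
        return "Quality+AF16x"             # AF only, no antialiasing
    return "Quality+AF16x+MSAA 4(8)x"      # add 4x or 8x full-screen AA

print(msaa_mode(45.0))  # Quality+AF16x
```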

The list of games and applications used in this test session was shortened to one popular semi-synthetic benchmarking suite and seven of the most recent and most resource-demanding games of various genres, with all updates installed as of the start of the test session:

If a game allowed recording the minimum fps, those readings were also added to the charts. We ran each game test or benchmark twice and took the better result for the diagrams, but only if the difference between the two runs didn’t exceed 1%. If it did, we reran the test at least once more to achieve repeatable results.
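The repeatability rule above can be sketched as a small loop; `run` here is a hypothetical callable standing in for one benchmark pass, and the canned fps values are purely illustrative:

```python
def best_repeatable(run, max_extra_runs=3):
    """Run the benchmark twice and keep the best result, but only once
    the last two runs agree within 1%; otherwise rerun (the rule above).
    `run` is a hypothetical callable returning one pass's average fps."""
    results = [run(), run()]
    while abs(results[-1] - results[-2]) / max(results[-2:]) > 0.01:
        if max_extra_runs == 0:
            break
        results.append(run())
        max_extra_runs -= 1
    return max(results)

# Canned results standing in for real benchmark passes: the first two
# differ by about 5%, so a third run is needed before a result is kept.
canned = iter([58.0, 61.0, 60.8])
print(best_repeatable(lambda: next(canned)))  # 61.0
```

Taking the best of all passes once two consecutive runs agree matches the article's "best result" wording while still enforcing the 1% repeatability check.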