I'm currently testing 4.5GHz at 1.25V without any dynamic clock or voltage adjustments; it stays at those levels at idle and under load. Idle temp is around 35C and load (AIDA64 stress test with FPU testing) is 67-69C. I let it run for 1.5h and it leveled out at that value; the CPU never throttled.

What voltage do you need for 4.8GHz?

I have a feeling that my CPU is above average and I could probably push a bit more, but I need 100% stability. I don't want a crash during a 45-70min points race in iRacing! So I'll probably leave it at 4.5GHz; the testing so far looks promising.

Awww, Slawter, you went down to the pleb chipset. No more Extreme Editions for you? But yeah, single-GPU systems don't need all those PCI-E lanes. Does the Z87 still require the hack to enable PCI-E 3.0 for NVIDIA GPUs?

Quote:

Awww, Slawter, you went down to the pleb chipset. No more Extreme Editions for you? But yeah, single-GPU systems don't need all those PCI-E lanes. Does the Z87 still require the hack to enable PCI-E 3.0 for NVIDIA GPUs?

Yes, the Z87 is so much better than the old X79. Intel has such an amazing platform with the real Ivy Bridge-E series, but the desktop versions are really disappointing. Mix that with the old X79 and you really have an unattractive solution.

Until Intel brings back worthy Extreme Editions with modern chipsets, it's not even worth looking at them. It's such a shame.

Haswell has enough PCIe lanes for normal SLI; even the previous desktop platform had that. More are only required for 3- or 4-way setups.

And no, nVidia automatically enables PCIe 3.0 in their drivers. This also applies to the new Ivy Bridge-E. Only Sandy Bridge-E had that problem. The chipset doesn't matter because the PCIe lanes come from the CPU.

Quote:

Originally Posted by slaWter

Haswell has enough PCIe lanes for normal SLI; even the previous desktop platform had that. More are only required for 3- or 4-way setups.

I think there was a case where, at Surround resolutions, there was a sizable difference between x8/x8 and x16/x16 PCI-E for dual-GPU systems, but that might have been on the 2.0 spec.

Quote:

Originally Posted by slaWter

And no, nVidia automatically enables PCIe 3.0 in their drivers. This also applies to the new Ivy Bridge-E. Only Sandy Bridge-E had that problem. The chipset doesn't matter because the PCIe lanes come from the CPU.

D'oh, I knew that. I probably should have worded it as whether the 4770K still requires the hack.

Quote:

I think there was a case where, at Surround resolutions, there was a sizable difference between x8/x8 and x16/x16 PCI-E for dual-GPU systems, but that might have been on the 2.0 spec.

That could be true in some cases. However: during my Tri-SLI testing the cards usually ran at x8 on the PCIe 2.x spec. Even extremely demanding tests showed no difference between x16/x8/x8, x8/x8/x8, or x16/x16 (when using only two cards). There might be a few frames on the line, but nothing major.

x8/x8 PCIe 3.0, like on Haswell for example, is easily enough for a two-card setup.
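The raw numbers back that up: PCIe 3.0 roughly doubles per-lane bandwidth over 2.0 (8 GT/s with 128b/130b encoding vs 5 GT/s with 8b/10b), so x8 at 3.0 lands almost exactly at x16 2.0 throughput. A quick back-of-the-envelope sketch (theoretical one-way bandwidth only, ignoring protocol overhead):

```python
def pcie_bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe link.

    Per-lane rates: PCIe 2.0 is 5 GT/s with 8b/10b encoding (500 MB/s/lane);
    PCIe 3.0 is 8 GT/s with 128b/130b encoding (~985 MB/s/lane).
    """
    per_lane_mb_s = {
        2: 5000 * 8 / 10 / 8,    # 500 MB/s per lane
        3: 8000 * 128 / 130 / 8, # ~984.6 MB/s per lane
    }
    return per_lane_mb_s[gen] * lanes / 1000

if __name__ == "__main__":
    print(f"x16 PCIe 2.0: {pcie_bandwidth_gb_s(2, 16):.2f} GB/s")  # 8.00 GB/s
    print(f"x8  PCIe 3.0: {pcie_bandwidth_gb_s(3, 8):.2f} GB/s")   # 7.88 GB/s
    print(f"x16 PCIe 3.0: {pcie_bandwidth_gb_s(3, 16):.2f} GB/s")  # 15.75 GB/s
```

So an x8/x8 split on PCIe 3.0 gives each card nearly the same bandwidth as a full x16 slot did on 2.0, which matches the test results above.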

AFAIK, only Sandy Bridge-E was "blocked" from PCIe 3.0 by nVidia because that CPU went through testing/certification before the PCIe 3.0 specification was finalized.