Today marks the release of Intel's newest CPU architecture, codenamed Skylake. I already posted my full review of the Core i7-6700K processor, so if you are looking for CPU performance and specification details on that part, you should start there. What we are looking at in this story is the answer to a very simple, but also very important, question:

Is it time for gamers using Sandy Bridge systems to finally bite the bullet and upgrade?

I think you'll find that the answer will depend on a few things, including your gaming resolution and your appetite for multi-GPU configurations, but even I was surprised by the differences I saw in testing.

Our testing scenario was quite simple: compare the gaming performance of an Intel Core i7-6700K processor and Z170 motherboard, running both a single GTX 980 and a pair of GTX 980s in SLI, against an Intel Core i7-2600K and Z68 motherboard using the same GPUs. I installed both the latest NVIDIA GeForce drivers and the latest Intel system drivers for each platform.

                Skylake System              Sandy Bridge System
Processor       Intel Core i7-6700K         Intel Core i7-2600K
Motherboard     ASUS Z170-Deluxe            Gigabyte Z68-UD3H B3
Memory          16GB DDR4-2133              8GB DDR3-1600
Graphics Card   1x GeForce GTX 980          1x GeForce GTX 980
                2x GeForce GTX 980 (SLI)    2x GeForce GTX 980 (SLI)
OS              Windows 8.1                 Windows 8.1

Our testing methodology follows our Frame Rating process, which uses capture hardware to measure frame times at the screen rather than trusting the software's own reporting.

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then use post-processing on the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
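As a rough illustration of what that post-processing produces, here is a minimal sketch of how FPS-by-percentile figures can be derived from a list of captured frame times. The data and function name are hypothetical; this is not the actual FCAT code, just the general idea.

```python
# Sketch: deriving FPS-by-percentile from captured frame times.
# Illustrative only -- not the actual FCAT post-processing code.
# Frame times are in milliseconds, as a capture tool might report them.

def fps_by_percentile(frame_times_ms, percentiles=(50, 90, 95, 99)):
    """For each percentile, report the FPS equivalent of the frame time
    that the given fraction of frames beat (lower frame time = higher FPS)."""
    times = sorted(frame_times_ms)  # fastest frames first
    results = {}
    for p in percentiles:
        # index of the frame time at the p-th percentile
        idx = min(len(times) - 1, int(len(times) * p / 100))
        results[p] = 1000.0 / times[idx]  # convert ms to FPS
    return results

# Hypothetical run: mostly ~16.7 ms frames with a few slow outliers
sample = [16.7] * 95 + [33.4] * 5
print(fps_by_percentile(sample))
```

A consistent card keeps its 99th-percentile FPS close to its average; a line that "tails off" in the percentile graph means the slowest few percent of frames are much slower than the rest.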

BioShock Infinite is a first-person shooter like you’ve never seen. Just ask the judges from E3 2011, where the Irrational Games title won over 85 editorial awards, including the Game Critics Awards’ Best of Show. Set in 1912, players assume the role of former Pinkerton agent Booker DeWitt, sent to the flying city of Columbia on a rescue mission. His target? Elizabeth, imprisoned since childhood. During their daring escape, Booker and Elizabeth form a powerful bond -- one that lets Booker augment his own abilities with her world-altering control over the environment. Together, they fight from high-speed Sky-Lines, in the streets and houses of Columbia, on giant zeppelins, and in the clouds, all while learning to harness an expanding arsenal of weapons and abilities, and immersing players in a story that is not only steeped in profound thrills and surprises, but also invests its characters with what Game Informer called “An amazing experience from beginning to end."

Our Settings for BioShock Infinite

Though the GTX 980 on the Core i7-6700K is 8% faster than it is on the 2600K, you can also see some differences in frame time consistency. Look at the FPS by Percentile graph, where the orange line representing Sandy Bridge tails off sooner than the black line representing Skylake.

At 2560x1440, all of this deviation between Skylake and Sandy Bridge is essentially gone, thanks to the added load placed on the GPU by the additional pixels. This isn't entirely surprising, but it does hint that gamers with older systems who are using higher resolution screens may not yet need to upgrade.

GTX 980 SLI Results - 2560x1440

Well, this is interesting: as we dive into the world of SLI and the hiccups of multi-GPU, even at 2560x1440, things start to look better for Skylake. In this case we find that the Core i7-6700K has about a 10% higher frame rate on average than Sandy Bridge, and also much tighter frame time consistency.

In your opinion, would this make any difference at 4K? I'm running Titan X in SLI at 4K, with a 4K G-Sync monitor smoothing the whole experience out, on a Sandy Bridge-E 3930K. Would upgrading to Skylake make any difference in my usage scenario?

My advice would be NOT to change your CPU. Instead, for games which drop below 60FPS, drop the resolution to 2560x1440, which not only may look IDENTICAL or very close, but will also boost the frame rate significantly.

I've done a lot of testing at 4K and have found it extremely difficult to find ANY games which look better to me over 1440p, including CIV5, which has really small text.

(and going forward we'll see a slow shift to DX12 which will likely eliminate much of the need to upgrade your current CPU)

That's a friend's PC. For myself, I won't be considering 4K since I'll want a high refresh rate such as the Acer Predator 1440p GSYNC monitor.

Other:
You may want to investigate how to force a frame rate cap for games so you always stay in asynchronous mode. I'm not sure how that works myself, though, since I've had no access to a G-Sync monitor. I've heard people say they could force a cap (globally?) to something like 135Hz on a 144Hz panel, as apparently it had to be slightly below the max refresh.

Ryan? I know this is an out-of-the-box comparison, but at this price point you can compare a 3930K with the 6700K. There you can run PCIe 3.0 (with a well-known tiny tool) and clock the CPU at 4 / 4.2 GHz. Now your frame times will be a lot better than with the old-fashioned 2600K (clocking at 3.4 / 3.8 GHz). There is no need for a sidegrade from SB-E / Ivy-E to Skylake. Would you please make a benchmark like this?

Thanks for this Ryan - I have a 970 SLI / 2600K setup and was curious what a 6700K would do. PCI Express 2.0 vs 3.0 is one difference that might matter here. Also, with the Oculus Rift in mind, it looks like Skylake gives a bit of a boost to minimum FPS -- which is going to be key for the right experience there. I'd like to see more investigation of minimum FPS as VR takes off next year.

As for making a case vs. the 2600K -- other than the I/O portion of Skylake, and the larger memory capability for high-end use cases with DDR4, I don't see a compelling case for the 6700K vs the 2600K. Some of the difference is clock speed (4.0/4.2 vs 3.4/3.8) - and Sandy Bridge can definitely match Skylake, overclocked or not, easily on air.

In any case, I was really happy to see this 980 SLI comparison of the two chips - thanks PCPer!

Am I missing the overclocking? Why not compare overclocked performance? Sandy Bridge clocks way higher than Skylake, from what I've heard.
Also, I'm guessing not a lot of people are running the 2600K stock; hell, I even run my non-K 2600 overclocked, and I never run into a situation where I feel I am lacking performance.
Was this a deliberate choice?

Word. All these new reviews are comparing unlocked processors at stock clocks. We're not idiots. I bet there is little to no performance increase if they compared the CPUs at a minimum of 4.5GHz, which is what my 2500K is running at.

Why would a "clock for clock" comparison matter? The video/article is clearly trying to say that it's time to upgrade from Sandy Bridge to Skylake, but if a Sandy Bridge processor can OC to 4.5-4.8 GHz and perform at a much higher level, it makes the whole upgrading debate displayed here irrelevant.

Still don't see a definite reason to upgrade yet if you have a Sandy. Much better to wait for Zen (+few proper DX12 games) and then decide. If it proves worthy, go for Zen, if it doesn't, then consider Skylake at better prices and possible discounts by then (same for DDR4 kits).

There really should be NEGLIGIBLE performance differences between Windows 7, 8 and 10 except for some niche cases which have software optimization issues.

I remember Windows 8 working better than Windows 7 for Battlefield 4 with Intel processors due to a core parking issue which appeared to be TRUE.

Anyway, if there is any truth to the performance difference again it should NOT be representative of most gaming or other application scenarios, or else there is a serious bug or other issue not sorted out due to the new CPU.

I ran all four (including 8.0) and tested each with a myriad of games. 7 was great, but had lots of overhead. As time went on, that overhead shrank, and now 10 is doing marvelously - until a recent update. Intel no longer supports Win10 on the Sandy Bridge cores. http://www.intel.com/support/graphics/sb/CS-034343.htm

GJ on your Skylake coverage. IMO PCPER is by far the best tech site out there.

Who runs a 2500K/2600K stock? It would have been interesting to see the 2600K at 4.5-5GHz to see how it compares to a stock (and an OC'd) 6700K.

Most people on Sandy Bridge compare their overclocked chip with the stock speed of whatever the new i7 is. It's hard to justify buying a new CPU/mobo/RAM when you have to overclock it just to see a decent improvement.

I'm running a 6 core Westmere chip (4GHz) on an X58 mobo and the only thing compelling me to perhaps upgrade is the feature set on the motherboards. Old versions of USB, SATA, PCIe and no UEFI make me a sad panda.

I suppose it's a good thing my setup is still decent, ever since I moved in with my girlfriend I never seem to have any money :(

You are interpreting my comment correctly. You can look at any set of benchmarks on different CPUs and find that CPUs fairly quickly hit a "good enough" point with high end graphics cards where there is really no difference between them. I think a stock 2600k is below that point (and has been for some time), primarily due to its low clock frequency.

I'd be *extremely* surprised to see significant differences between an overclocked 2600k and an overclocked 6700k, however. I would expect to see no more than a 1-5% difference between the two at 1080p+.

I really like pcper in general, but you guys lost a lot of credibility with this article by not comparing overclocked speeds. It comes across as if you were paid off by Intel to write this article to make Skylake look better than it is.

They will overclock to the same frequency, which means they would then be on an even playing field minus the IPC improvements.

A stock 3.4GHz 2600K vs a 4GHz 6700K is a bit different than running both the 2600K and the 6700K at 4.5-4.7 GHz.

Which is what we are trying to point out. The gap would lessen in this situation and the test conclusions would be more accurate. Not only that, but you need to use the SAME amount of RAM in both test benches. 8GB vs 16GB will also cause issues at 1440p, especially in games like GTA, which is a poor port.

I suggest you try the suggested things and then give us an actual comparison that doesn't appear to just be pushing for sales.

I too would like to see overclocked Sandy Bridge results compared. To do this test without those numbers is absolutely criminal. The difference between my 4.7GHz i5 2500K and this new processor has to be fairly minimal, and besides that, I found GTA V to be extremely playable @ 2560x1440, so I don't believe it was far exceeding 4ms anymore. Come on guys! Re-test with max overclocks; maybe include these numbers for comparison too.

Then you will have to go with the Xeon options at a higher cost. You will get no support for ECC in consumer SKUs from Intel, as that competes with the workstation part of their business. Expect to dole out more for the motherboard for the ECC capabilities - and a motherboard without all the overclocking options, at that.

Until AMD can begin to field its Zen-based SKUs with even more integration with its GPUs, and eventually the ability to directly dispatch FP computations to the GPU from the CPU cores, things will not improve on the consumer side of the equation. Even if Zen only comes up to Sandy Bridge levels, that HSA ability to send calculations to the GPU will be what puts Intel at more than just a price disadvantage, even with current-generation HSA 1.0 compliance and without the future direct dispatching of FP workloads to the GPU.

There are plenty of Ivy Bridge and Haswell parts that will be on sale (and not as many Broadwell, because of the 14nm delays), but why pay for the latest from Intel when it does not beat the previous SKUs by a wide enough margin to justify the cost of a new motherboard? Just upgrade to an earlier Ivy Bridge, Haswell, or Broadwell and wait it out.

I guess if I were anxious to get into 4K, then I would upgrade. But I am perfectly happy with my i7-860 and GTX 660 Ti right now at 1080p. How many people are still on older hardware like mine, or even older? You'd be very surprised.

I'm on an LGA771 Xeon E5450 @ 3.85GHz in an old ASUS mATX LGA775 board with 8GB DDR2 and a GeForce 560 Ti, and I still make it work with Battlefield 4 at 1080p, although I'm about to drop the coin on a new Skylake system.

I would be interested to see how much of the difference is due to memory speed and PCIe speed. It isn't really relevant to a purchasing decision, since you can't separate the processor from the rest of the platform improvements, but it may be interesting to turn the memory clock down and run a few tests, if you have the time. I don't know if it is possible to explicitly set PCIe 2.0 mode, though. Some of the platform power consumption differences may be due to lower DDR4 power consumption as well.

I need to see GTA V tested with an equal 16GB of RAM before I can trust those results. That game chews through memory.

I would also like to echo the people asking for testing with Sandy Bridge overclocked. The whole point of the K parts is to overclock them, so it seems odd not to test that (although I completely understand the time constraints you had).

I couldn't make a buying decision without having these questions answered to be honest.

Come on guys... why no clock-for-clock testing? I don't know a single person who ran a 2600K at stock speeds, and it's clocked quite a bit lower at stock than the 6700K. Fail review is fail... nothing but useless info here.

Run them both at 4.7+ GHz and then you'd have some meaningful information.

For what it's worth, I am running heavily overclocked Titan X SLI at 4k. I upgraded from a 4.8 GHz 2600k to a 4.6 GHz 5930k and the differences were minimal (I did have a PLX Z68 motherboard however). Crysis 3 got a little higher FPS in the very CPU intensive sections, and GTA V got a little smoother at the same FPS, and that was about it.

I appreciate the testing very much, but at the same time I strongly disagree with your conclusion.
Would you recommend people spend $600 on a new graphics card that delivers somewhere between a 7% and 25% improvement depending on the scenario?
Because that's what you're recommending for the motherboard, CPU and RAM combination.

I would guess that most PCPers still using their SB chips are doing so because they are great OCers, with most people getting around ~4.5GHz.

To not factor that into the analysis of an article whose subtitle is "Is it time for gamers using Sandy Bridge system to finally bite the bullet and upgrade?" is a pretty glaring oversight, and it turns what could have been an extremely helpful and useful article into a curiosity.

My 3930K @ 4.6 still spoils me with insane FPS. Upgrading from this chip is a really bad idea right now. I just went to 980 Ti SLI @ 1440p/144Hz and it's so fast it just spoils me, really. If Skylake-E manages to impress, maybe I'll consider upgrading to that just for the lulz.

First off, I'm running an i5 2500K at 4.5GHz and an AMD R9 280. If you are playing the games you want to play without any problems - smooth gameplay, etc. - why would you even consider upgrading any part of your computer?

What is the real-world difference between a stable 60fps and 120fps visually? No difference! Stop being ridiculous in recommending upgrades that don't benefit the vast majority of gamers operating at 1080p, because they won't see a difference worth a $600 price tag. As someone else mentioned, it looks like you are being paid by Intel to make these suggestions and comparisons when it has very little impact on 1080p gamers.

I won't be upgrading my processor anytime soon. The video card will be my next upgrade, when my gameplay gets closer to 30fps with new games - which I estimate will be at least two years from now, according to the trends. DX12 improvements might push that upgrade even further into the future. So let's stick to real-world comparisons in the future!

Yes, I have to agree with everyone here who has stated that the benchmarks/tests were not performed at overclocked speeds or with the same amount of memory. Just to show you the difference against your stock apples-to-apples Skylake review numbers: my 2600K at 4750MHz (a 35.71% higher clock speed) shows a dramatic improvement in my SiSoft Dhrystone and Whetstone scores. Dhrystone went from a score of 116.86 @ 3500MHz to 186.59 @ 4750MHz, an amazing 59.6% increase, while Whetstone went from 73.44 @ 3500MHz to 97.41 @ 4750MHz, a 32.64% increase, which is closer to the clock speed increase. The Dhrystone result amazed me - I was expecting it to be close to the 35.71% clock speed change, and I cannot account for it. I would be grateful to anyone who can explain the dramatic 59.6% increase over the clock speed percentage increase.

Also, I am repeating what everyone else said: no one buys a 2500K/2600K to run it at stock speeds. Heck, if you give me that Skylake platform, I will give you my 5.1+GHz-capable 2600K, which I keep cool with a nice and simple but great-performing Cooler Master Nepton 140XL, with the push-pull fans at 40% moving air through its 38mm-thick radiator, which is silent for the most part. It out-cools most 240mm (not 280mm) radiators with its powerful pump and very large copper cold plate, which has more microfins than any DIY water cooling system's CPU block, according to FrostyTech, I believe.

I have to give you a huge thumbs up for adding SLI to the test, even though you did not overclock the five-year-old KING of mainstream CPUs, good ole Sandy Bridge. The 2600K is the best CPU I have ever purchased, and I do not think anything will ever match the payback that CPU has given me - and is STILL giving me - in performance on a 24/7 daily basis.
Another fantastic thing about Skylake is that they removed the idiotic on-die VRMs, and they are back on the motherboard, nice and big. I feel Haswell's on-die VRMs caused more CPU failures and chip degradation problems than I ever really hear about: not only were they too small to push many volts through, they also added heat to the CPU die itself. Luckily, Skylake has them back on the motherboard where they can be cooled correctly, and it can even help you choose a motherboard. If you have two motherboards that have everything you need and you need something to make up your mind, pick the motherboard with the most VRMs, thus the best power delivery for the CPU, leading to less vDroop, etc.
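For what it's worth, the percentage math in the comment above can be double-checked in a few lines. The scores are the commenter's reported SiSoft numbers, nothing verified here; the helper name is just for illustration.

```python
# Check the scaling math quoted above (scores are the commenter's
# reported SiSoft results; nothing here is official or verified data).

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100.0

clock = pct_increase(3500, 4750)          # clock: 3500MHz -> 4750MHz
dhrystone = pct_increase(116.86, 186.59)  # Dhrystone score change
whetstone = pct_increase(73.44, 97.41)    # Whetstone score change

print(f"clock +{clock:.1f}%, Dhrystone +{dhrystone:.1f}%, Whetstone +{whetstone:.1f}%")
```

The figures come out to roughly +35.7% clock, +59.7% Dhrystone, and +32.6% Whetstone, matching the comment's arithmetic to within rounding; Whetstone tracks the clock bump while Dhrystone outpaces it.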

I am not going to do any more SiSoft tests, because it is making me want to clock my 2600K to 5100MHz, clock my SLI'ed EVGA GTX 770 Classified cards to 1400MHz core and 8000 memory, and do some benchmarking, which I do not really need. I am using an LG 34UM95 34" 21:9 3440x1440 IPS monitor with 8-bit-to-10-bit color by dithering and a 60Hz cap (tried overclocking the panel with no luck), so I do not use Vsync, but I do set a frame rate target of 67fps with EVGA's Precision software. It saves me a lot of unneeded power use and keeps the cards cool, since they are not pushing out every frame they possibly can (120+fps) all the time, and I get no tearing or stuttering and every game is buttery smooth. Yes, I did do a 30-minute test with the G-Sync-enabled 34" 3440x1440 Predator monitor, but nothing I had time to play ran under 60fps, and my current rig runs everything above 60fps anyway. My main game right now is War Thunder Ground Forces, which has fantastic graphics and gameplay - blows World of Tanks outta the water - plus, if you're into it, you can fly planes also.

I think this test isn't exactly even. First off, you didn't test the performance of these CPUs in their overclocked state. As these are K processors, chances are the people who would want these comparisons will be overclocking them. So a base-vs-base comparison is already biased, since they have vastly different base clocks.

I can speak from experience that most 2600Ks can reach 4.5GHz, and of those, most can reach 4.7GHz with decent cooling. I mention this because the higher the CPU speed goes, the less likely the CPU is to bottleneck the GPUs.

The next thing I noticed in your test is the difference in RAM: 8GB vs 16GB. While some might not think this matters, I have seen some of these games eat up over 8GB easily when run at 1440p or higher resolutions.

So my point here is that your test beds were not as similar as possible. You might not be able to use the same RAM or speed of RAM, but you should have at least matched the amount of RAM on both machines.

I think if you were to do the two things I suggest, you would see different results. There might still be an advantage to Skylake, but I think the gap will be much more minimal - 1-4% would be my guess.