I think there are many of us who had the same idea. Unless you needed to upgrade due to malfunction or a new laptop purchase, holding onto C2D until past the i-series was the best move to make, whereas buying into C2D ASAP was the best move at the time.

Still going to wait for prices to fall and more USB 3.0 adoption. Expected new purchase: mid-2011 to mid-2012.

Ya know, I usually do as you are, but I was an early adopter of the i7 920. Looking now, it seems I made the right choice. I have had two years of kickassery and my processor still holds up rather well in this article.

Me too! I've got an E8400 running at 3.9 with almost zero OC know-how and it's done me well. I might snap up an i7 if they and their mobos get cheap when Sandy Bridge has been out a few months... but may well skip that generation altogether.

Holy crapola, AMD really needs Bulldozer now. Even in heavily threaded video encoding, the 2600K at $300 is blowing the 1100T X6 out of the water. This is the Core 2 Duo vs. A64 X2 all over again. Will Bulldozer be another Phenom, a day late and a dollar short? TLB bug, anyone? As a PC enthusiast I really want to see competition to keep prices in check. If I had to upgrade today, I can't see how I could turn down the 2600K...

Yeah, new Intel motherboard models are never cheap. I don't understand why the price remains so high when more and more functionality is moving to the CPU. The other killer is that you need a new board for every Intel CPU update.

Lastly, it's hard to throw the "buy now" tag on it with AMD's new architecture on the horizon. Sure, AMD has a tough act to follow, but it's still an unknown that I think is worth waiting for (if it's a dog, you can still buy Intel). Keep in mind that Bulldozer will have a pretty strong IGP, one that may make decent IGP gaming a reality. It will become a matter of how powerful the x86 portion of Bulldozer is, and they are trying a considerably different approach. Considering the amount of money you'll be paying, you might as well see how AMD shakes out. I guess it just depends on whether what you have today can get you by just a little longer.

You're conflating Bulldozer and Llano there. Bulldozer is the new architecture, coming to the desktop as an 8-core throughput monster. Llano is the first desktop APU, cramming 4 32nm K10.5 cores and a Redwood class GPU onto the die. The next generation of desktop APUs will be using Bulldozer cores.

I think that an AMD 880G mainboard with a CPU around 90 dollars plus some 55xx-series GPU can do better in terms of encoding, decoding, video playback, games, etc., and all that without a lot of money spent on Intel's new sockets, which you have to throw away when they make the next CPU. So please correct me if I am wrong.

I'm wondering how supply will be on release day? Often we see new components with low supply, and online stores start price gouging from day one. Newegg is particularly known for such. Let's hope supply is very good off the bat. That 2600K looks really appealing to me.

One of the local computer stores had Sandy Bridge parts up for sale last week, but they're all gone now save for a few Asus P8P67 standard, pro, and deluxe boards.

I wasn't able to see what kind of money they were asking.

This review has convinced me that once the 2600K shows up again it's all I'll need. I was going to wait for socket 2011 but damn, the 2600 is already more than twice as fast in everything as my poor ol' Q6600.

"As an added bonus, both K-series SKUs get Intel’s HD Graphics 3000, while the non-K series SKUs are left with the lower HD Graphics 2000 GPU."

Doesn't it seem like Intel has this backwards? I'd think to put the 3000 on the lesser-performing CPUs. Users will probably have their own graphics to use with the unlocked procs, whereas the limit-locked ones will more likely be used in HTPC-like machines.

This seems odd to me unless they're having yield problems with the GPU portion of their desktop chips. That doesn't seem too likely though because you'd expect the mobile version to have the same problem but they're all 12 EU parts. Perhaps they're binning more aggressively on TDP, and only had enough chips that met target with all 12 EUs to offer them at the top of the chart.

I agree with both of you. This should be the ultimate upgrade for my E8400, but I can't help thinking they could've made it even better if they'd used the die space for more CPU and less graphics and video decode. The Quick Sync feature would be awesome if it could work while you're using a discrete card, but for most people who have discrete graphics, this and the HD Graphics 3000 are a complete waste of transistors. I suppose they're power gated off so the thermal headroom could maybe be used for overclocking.

Great review, but does anyone know how often only 1 core is active? I know this is a matter of subjectivity, but if you're running an anti-virus and have a bunch of standard services running in the background, are you likely to use only one core when idling?

What should I advise people, as consumers, to really pay attention to? I know when playing games such as Counter-Strike or Battlefield: Bad Company 2, my C2D maxes out at 100%, I assume both cores are being used to achieve the 100% utilization. I'd imagine that in this age, hardly ever will there be a time to use just one core; probably 2 cores at idle.

I would think that the 3-core figures are where the real noticeable impact is, especially in turbo, when gaming/browsing. Does anyone have any more perceived input on this?

According to Bench, it looks like he used 1680×1050 for L4D, Fallout 3, Far Cry 2, Crysis Warhead, Dragon Age Origins, and Dawn of War 2, and 1024×768 for StarCraft 2. I couldn't find the tested resolution for World of Warcraft or Civilization V. I don't know why he didn't list the resolutions anywhere in the article or the graphs themselves, however.

Games are usually limited in fps by the level of graphics, so processor speed doesn't make much of a difference unless you turn the graphics detail right down and use an overkill graphics card. As the point of this page was to review the CPU power, it's more representative to use low resolutions so that the CPU is the limiting factor.

If you did this set of charts for gaming at 2560x1600 with full AA & max quality, all the processors would be stuck at about the same rate because the graphics card is the limiting factor.

I expect Civ 5 would be an exception to this because it has really counter-intuitive performance.

For almost any game, the resolution will not affect the stress on the CPU. It is no harder for a CPU to play the game at 2560x1600 than it is to play at 1024x768, so to ensure that the benchmark is CPU-limited, low resolutions are chosen.

For instance, the i5 2500K gets ~65fps in the StarCraft test, which is run at 1024x768. The i5 2500K would also be capable of ~65fps at 2560x1600, but your graphics card might not keep up at that resolution.

Since this is a review for a CPU, not for graphics cards, the lower resolution is used, so we know what the limitation is for just the CPU. If you want to know what resolution you can play at, look at graphics card reviews.
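The bottleneck argument above can be sketched as a toy model (the fps numbers are hypothetical, purely for illustration):

```python
# Toy model: delivered fps is capped by whichever of CPU and GPU is slower.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# Low resolution: the GPU is barely loaded, so CPU differences show up.
print(delivered_fps(65, 300), delivered_fps(90, 300))   # 65 90

# High resolution: the GPU becomes the bottleneck and every CPU looks the same.
print(delivered_fps(65, 40), delivered_fps(90, 40))     # 40 40
```

That's why a CPU review benchmarks at 1024x768 while a GPU review benchmarks at 2560x1600: each wants its own component to be the `min`.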

Which is why the tests have limited real world value. Skewing the tests to maximize the cpu differences makes new cpus look impressive, but it doesn't show the reality that the new cpu isn't needed in the real world for most games.

Maybe I missed this in the review, Anand, but can you please confirm that SB and SB-E will require quad-channel memory? Additionally, will it be possible to run dual-channel memory on these new motherboards? I guess I want to save money because I already have 8GB of dual-channel RAM :).

This has been discussed in great detail. The i7, i3, and i5 2XXX series is dual channel. The rumor mill abounds with SB-E having quad channel, but I don't recall seeing anything official from Intel on this point.

I wonder why, though? Is this just official policy? I can't really see a good technical reason why CPU OC would work with P67 but not H67; it is just turbo going up some more steps, after all. Maybe board manufacturers can find a way around that? Or is this not really linked to the chipset but rather to whether the IGP is enabled (which after all is also linked to turbo)?

I just checked the manual to MSI's 7676 mainboard (high-end H67) and it lists the CPU core multiplier in the BIOS (page 3-7 of the manual, the only limitation mentioned is that of CPU support), with nothing grayed out and overclockability listed as a feature. As this is version 1.1, I think someone misunderstood something....

Unless MSI has messed up its manual after all and just reused the P67 manual.... Still, the focus on over-clocking would be most ridiculous.

Yep. This is IMHO extremely stupid. I wanted to build a PC for someone that mainly needs CPU power (video editing). An overclocked 2600K would be ideal with QS, but either wait another 3 months or go all compromise... in that case probably H67, but still paying for the K part and not being able to use it. Intel does know how to get the most money from you...

I'm surprised nobody cares there's no native USB 3.0 support coming from Intel until 2012. It's obvious they are abusing their position as the number 1 chip maker, trying to push Light Peak as a replacement for USB. The truth is, Light Peak needs USB for power; it can never live without it (unless you like to carry around a bunch of AC adapters). Intel wants Light Peak to succeed so badly, they are leaving USB 3.0 (its competitor) by the wayside. Since Intel sits on the USB board, they have a lot of pull in the industry, and as long as Intel won't support the standard, no manufacturer will ever get behind it 100%. Sounds very anti-competitive to me. Considering AMD is coming out with USB 3.0 support in Llano later this year, I've already decided to jump ship and boycott Intel. Not because I'm upset with their lack of support for USB 3.0, but because their anti-competitive practices are inexcusable; holding back the market and innovation so their own proprietary format can get a head start. I'm done with Intel.

Sure, if you're building a desktop you can find plenty with USB 3.0 support (via NEC). But if you're looking for a laptop, most will still not have it. That's because manufacturers don't want to have to pay extra for features when they usually get them via the chipsets already included. Asus is coming out with a handful of notebooks in 2011 with USB 3.0 (that I know of), but widespread adoption will not be here this year.

Most decent laptops will have USB3. ASUS, Dell, HP, Clevo, and Compal have all used the NEC chip (and probably others as well). Low-end laptops won't get USB3, but then low-end laptops don't get a lot of things.

Even netbooks usually have USB 3.0 these days, and those almost all use Intel Atom CPUs. The cost to add the controller is negligible for large manufacturers. USB is not going to be the deciding factor for purchases.

Your claims are pretty silly, seeing as how USB came about in the same way that Light Peak did: Intel invented USB, pushed it over legacy ports like PS/2, and slowly phased out support for the older ones entirely over the years. It makes no sense for them to support USB 3.0, especially without a real market of devices. But motherboard manufacturers will support USB 3.0 via add-in chips. I don't see how this is anti-competitive at all; why should Intel have to support a format it doesn't think makes sense? So far USB 3.0 hasn't really shown speeds close to its theoretical maximum, and the only devices that really need the higher bandwidth are external drives that are better off being run off eSATA anyway. There's no real "killer app" for USB 3.0 yet. BTW, Light Peak will easily support adding power to devices, so it definitely does not need USB in order to provide power. There'll just be two wires running alongside the fiber optics.

The eSATA + USB (power) connector has never gone anywhere, which means that eSATA devices need at least 2 cables to work. Flash drives and 2.5" HDDs don't need enough power to require an external brick, and 80-90% of eSATA speed is still much better than the USB2 bottleneck. With double the amount of power of USB2, USB3 could theoretically be used to run 3.5" drives with a double-socket plug, freeing them from the wall as well.

If QuickSync is only available to those using the integrated GPU, does that mean you can't use QS with a P67 board, since they don't support integrated graphics? If so, I'll end up having to buy a dedicated QS box (a micro-ATX board and an S- or T-series CPU seem to be up to that challenge). Also, what if the box is headless (e.g. Windows Home Server)?

Does the performance of QS depend on the number of EUs? The QS testing was on a 12-EU CPU; does performance get cut in half on a 6-EU CPU (again, S- or T-series CPUs would be affected)?

No mention of Intel AVX functions. I suppose that's more of an architecture thing (which was covered separately), but there are no benchmarks (synthetic or otherwise) to demo the new feature.

I'm not that interested in playback on that device; it's going to be streamed to my PS3, DLNA-enabled TVs, iPad/iPhone, etc. Considering this won't be supported as a hackintosh for a while, I might as well build a combo transcoding station and WHS box.

It wouldn't surprise me if that was intentional. I would hope that Anandtech reviewers were not letting companies dictate how their products were to be reviewed, lest AT be denied future prerelease hardware. I can't tell from where I sit, and there appears to be no statement denying that there is such interference.

In addition, real-world benchmarks aside from games look to be absent. Seriously, I don't use my computer for offline 3D rendering, and I suspect that very few other readers do to any significant degree.

Also, isn't SYSmark 2007 a broken, misleading benchmark? It was compiled with Intel's compiler, you know, the broken one that degrades performance on AMD and VIA processors unnecessarily. Also, there is this bit that Intel has to include with its comparisons that use BAPCo (Intel) benchmarks comparing Intel's processors to AMD or VIA processors:

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchase, including the performance of that product when combined with other products.

It isn't perfect, but that is what the FTC and Intel agreed to, and until new benchmarks are released by BAPCo that do not inflict poor performance on non-Intel processors, the results are not reliable. I don't see any problem if the graph did not contain AMD processors, but that isn't what we have here. If you are curious, for better or for worse, BAPCo is a non-profit organization controlled by Intel.

Hardware vendors have no input into how we test, nor do they stipulate that we must test a certain way in order to receive future pre-release hardware. I should also add that should a vendor "cut us off" (it has happened in the past), we have many ways around getting supplied by them directly. In many cases, we'd actually be able to bring you content sooner as we wouldn't be held by NDAs but it just makes things messier overall.

Either way, see my response above for why the 1100T is absent from some tests. It's the same reason that the Core i7 950 is absent from some tests, maintaining Bench and adding a bunch of new benchmarks meant that not every test is fully populated with every configuration.

As far as your request for more real world benchmarks, we include a lot of video encoding, file compression/decompression, 3D rendering and even now a compiler test. But I'm always looking for more, if there's a test out there you'd like included let me know! Users kept on asking for compiler benchmarks which is how the VS2008 test got in there, the same applies to other types of tests.

Thanks for replying to my comment. I now understand why the review was missing some benchmarks for processors like the 1100T. I was also a bit hasty in my accusations with respect to interference from manufacturers, which I apologize for.

I still have trouble with including benchmarks compiled with the Intel compiler without a warning or explanation of what they mean. It really isn't a benchmark with meaningful results if the 1100T used x87 code and the Core i7-2600K used SSE2/SSE3 code. I would have no problem with reporting results for benchmarks compiled with Intel's defective compiler, like SYSmark 2007 and Cinebench R10, assuming they did not include results for AMD or VIA processors, along with an explanation of why they were not applicable to AMD and VIA processors. However, not giving context to such results I find problematic.

There are a few holes in the data we produce for Bench, I hope to fill them after I get back from CES next week :) You'll notice there are some cases where there's some Intel hardware missing from benchmarks as well (e.g. Civ V).

Seems Intel did everything right for these to fit snugly into next-gen Macs. Everything nicely integrated into one chip, and the encode/transcode speed boost is icing on the cake (if supported, of course), being that Apple is content focused. Nice addition if you're a Mac user.

Except for the whole thing about not knowing if the GPU is going to support OpenCL. I've heard Intel is writing OpenCL drivers for possibly a GPU/CPU hybrid, or utilizing the new AVX instructions for CPU-only OpenCL.

Other than that, the AT mobile SNB review included a last-gen Apple MBP 13" and the HD3000 graphics could keep up with the Nvidia 320M - it was equal to or ahead in low-detail settings and equal or slightly behind in medium detail settings. Considering Nvidia isn't going to rev the 320M again, Apple may as well switch over to the HD3000 now and then when Ivy Bridge hits next year, hopefully Intel can deliver a 50% perf gain in hardware alone from going to 18 EUs (and maybe their driver team can kick in some performance there too).

"Unlike P55, you can set your SATA controller to compatible/legacy IDE mode. This is something you could do on X58 but not on P55. It’s useful for running HDDERASE to secure erase your SSD for example"Or running old OSes.Reply

Sounds like this is SO high end it should be the server market. I mean, why make yet ANOTHER socket for servers that use basically the same CPUs? Everything's converging and I'd just really like to see server mobos converge into "High End Desktop" mobos. I mean seriously, my E8400 OC'd with a GTX 460 is more power than I need. A quad would help with the video editing I do in HD, but it works fine now, and with GPU-accelerated rendering the rendering times are totally reasonable. I just can't imagine anyone NEEDING a home computer more powerful than the LGA 1155 socket can provide. Hell, 80-90% of people are probably fine with the power Sandy Bridge gives in laptops now.

Perhaps it is like you say, however it's always good for buyers to decide if they want server-like features in a PC. I don't like manufacturers dictating to me only one way to do it (like Intel does now with the odd combination of HD3000 graphics and the Intel H67 chipset). Let us not forget that for a long time, all we had were 4 slots for RAM and 4-6 SATA connections (like you probably have). Intel X58 changed all that: suddenly we had the option of 6 slots for RAM, 6-8 SATA connections and enough PCI Express lanes. I only hope that LGA 2011 brings back those features, because like you said: it's not only the performance we need, but also the features. And remember that software doesn't stand still; it usually requires multiple processor cores (video transcoding, antivirus scanning, HDD defragmenting, a modern OS, and so on...). All this aside, the main issue remains: Intel must be persuaded to stop looting users' money and implement only one socket at a time. I usually support Intel, but in this regard, AMD deserves congratulations!

LGA 2011 is a high end desktop/server convergence socket. Intel started doing this in 2008, with all but the highest end server parts sharing LGA 1366 with top end desktop systems. The exception was quad/octo socket CPUs, and those using enormous amounts of RAM, which used LGA 1567.

The main reason why LGA 1155 isn't suitable for really high end machines is that it doesn't have the memory bandwidth to feed hex- and octo-core CPUs. It's also limited to 16 PCIe 2.0 lanes on the CPU vs 36 PCIe 3.0 lanes on LGA 2011. For most consumer systems that won't matter, but 3/4-GPU systems will start losing a bit of performance when running in a x4 slot (only a few percent, but people who spend $1000-2000 on GPUs want every last frame they can get); high end servers with multiple 10Gb Ethernet cards and PCIe SSD devices also begin running into bottlenecks.

Not spending an extra dollar or five per system for the QPI connections only used in multi-socket systems in 1155 also adds up to major savings across the hundreds of millions of systems Intel is planning to sell.

I'm confused by the upset over playing video at 23.976Hz. "It makes movies look like, well, movies instead of TV shows"? What? Wouldn't recording at a lower frame rate just mean there's missed detail, especially in fast action scenes? Isn't that why HD runs at 60fps instead of 30fps? Isn't more FPS good as long as it's played back at the appropriate speed, i.e. whatever it's filmed at? I don't understand the complaint.

On a related note, Hollywood and the world need to just agree that everything gets recorded and played back at 60fps at 1920x1080. No variation AT ALL! That way everything would just work. Or better yet 120fps, with the ability to turn 3D on and off as you see fit. Whatever FPS is best. I've always been told higher is better.

You are right about having more detail when filming with higher FPS, but this isn't about it being good or bad; it's more a matter of tradition and visual style. The look movies have these days, the one we got accustomed to, is mainly achieved by filming in 24p, or 23.976 to be precise. The look you get when filming with higher FPS just doesn't look like cinema anymore but TV. At least to me. A good article on this: http://www.videopia.org/index.php/read/shorts-main... The problem with movies looking like TV can be tested at home if you've got a TV that has some kind of motion interpolation, e.g. MotionFlow as Sony calls it or Intelligent Frame Creation by Panasonic. When turned on, you can see the soap opera effect from the added frames. There are people that don't see it and some that do and like it, but I have to turn it off since it doesn't look "natural" to me.
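For what it's worth, the 24p-on-60Hz mismatch this thread circles around comes down to cadence: 60/24 = 2.5, so players repeat alternate film frames for 3 and then 2 refreshes. A rough sketch of that 3:2 pulldown (not any particular player's implementation):

```python
# 3:2 pulldown: alternate film frames are held for 3 and 2 display
# refreshes, averaging 2.5 refreshes per frame (60Hz / 24fps = 2.5).
def pulldown_3_2(frames):
    shown = []
    for i, frame in enumerate(frames):
        shown += [frame] * (3 if i % 2 == 0 else 2)
    return shown

print(pulldown_3_2(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

The uneven hold times are what people perceive as judder; a display refreshing at an exact multiple of 23.976 shows every frame for the same duration, which is why the review made a fuss over proper 23.976Hz output.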

Why is it that Quick Sync has better scaling? It's very evident in the Dark Knight police car image, as all the other versions have definite scaling artifacts on the car.

Scaling is something that should be very easy. Why is there so big a difference? Are these programs just made to market new stuff and no-one really uses them because they suck? So big scaling differences between codepaths make no sense.

It looks to me like some of the encodes have a sharpening effect applied, which is either good (makes text legible) or bad (aliasing effects) depending on your perspective. I'm quite happy overall with the slightly blurrier QS encodes, especially considering the speed.

Who needs the IGP for a tuned-up desktop PC anyway? Some for sure, but I see the main advantages of the SB GPU for business laptop users. As the charts show, for desktop PC enthusiasts, the GPU is still woefully slow, being blown away even by the (low-end) Radeon 5570. For this reason, I can't help feeling that the vast majority of overclockers will still want to have discrete graphics.

I would have preferred the dual-core (4-thread) models to have (say) 32 shaders, instead of the 6 or 12 being currently offered. At 32nm, there's probably enough silicon real estate to do it. I guess Intel simply didn't want the quad-core processors to have lower graphics performance than the dual-core ones (sigh).

Pity that the socket 2011 processors (without a GPU) are apparently not going to arrive for nearly a year (Q4 2011). I had previously thought the schedule was Q3 2011. Hopefully, AMD's Bulldozer-based CPUs will be around (or at least imminent) by then, forcing Intel to lower the prices of its high-end parts. On the other hand, time to go; looks like I'm starting to dream again...

Using myself as an example, the drawback of limiting overclocking on H67 would be the lack of a good selection of overclocking-friendly micro-ATX boards, since most, if not all, of those are H67.

Granted, that's not Intel's fault.

It's just that I have no need for more than one PCIe x16 slot and 3 SATA ports (DVD, HDD, SSD). I don't need PCI, FDD, PS/2, serial, or parallel connectors at all.

Which ideally means I'd prefer a rather basic P67 design in micro-ATX format but those are, currently, in short supply.

The perfect motherboard, for me, would probably be a P67 micro-ATX design with the mandatory x8/x8 Crossfire support, one x1 and one x4 slot, front panel connector for USB 3, dual gigabit LAN and the base audio and SATA port options.

Keep in mind though, that the over-clocking issue may not be as bad as pointed out. There are H67 boards being marketed for over-clocking ability and manuals showing how to adjust the multiplier for CPUs... I'm not yet convinced over-clocking will be disabled on H67.

Major bummer, as I was going to order a Gigabyte H67 board and an i5-2500K but am put off now. They seem to over-clock so well and with low power consumption that it seemed the perfect platform for me… I don't mind paying the small premium for the K editions, but being forced to use a P67, losing the graphics, and having difficulty finding a mATX P67 board seems crazy!

I wonder if this limit is set in the chipset or if it can be changed with a BIOS update?

Looking at the Intel software encoding and the AMD encoding, it looks like the AMD is more washed out overall, which makes me think there's actually something related to colorspaces or color space conversion involved....

Are you guys sure there's no PC/TV mixup there with the luminance or ATI using the color matrix for SD content on HD content or something like that?

Thanks for the excellent rundown of Sandy Bridge. As I have an X58 system I'm going to skip it and see what happens in Q4. X58 has been a good platform and has lasted longer than most others in recent years.

I've thought it over... and I don't believe that H67 should only support GPU overclocking. Like others said, buy a "K" CPU on H67 and you get HD3000 graphics but cannot overclock... and on the other side, those with P67 who buy an unlocked "K" CPU get HD3000 but cannot use it... then what's the point of making HD3000 graphics?

As they pointed out, with the Z series motherboard you can have both. That said, it does seem stupid that Intel would launch with those official guidelines, and in these comments others are saying some H67 motherboards are allowing the CPU multiplier to be changed.

As tempting as this chip looks, my 3.8GHz Core 2 Quad is still more CPU than I can really use most of the time. I wonder if we're reaching the point where improved compute performance is not really necessary for mainstream and even most enthusiast users.

In any case, the upcoming 6-core/12-thread variant sounds interesting. Maybe I'll upgrade to that if Intel doesn't assign it to the $999 price point.

Look at the PCIe x16 from the CPU. Intel indicates a bandwidth of 16GB/s for the x16 link. That means 1GB/s per lane. But PCIe v2 has a bandwidth of only 500MB/s per lane. That means the values Intel indicates for the PCIe lanes are the sum of the upload AND download bandwidth.

That means the PCIe lanes of the chipset run at 250MB/s! That is the bandwidth of PCIe v1, and Intel pulled the same bullshit with the P55/H57: they indicate that they are PCIe v2 but limit their speed to PCIe v1 values.
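The arithmetic behind that objection is easy to check (assuming PCIe 2.0's 500MB/s per lane, per direction):

```python
# PCIe 2.0: 5 GT/s with 8b/10b encoding = 500 MB/s per lane, per direction.
PER_LANE_GBS = 0.5
lanes = 16

one_way = lanes * PER_LANE_GBS   # 8 GB/s in each direction
both_ways = 2 * one_way          # 16 GB/s only if both directions are summed

print(one_way, both_ways)        # 8.0 16.0
```

So Intel's 16GB/s figure for the x16 link only adds up if it counts both directions, which is the commenter's point: applying the same bidirectional counting to the chipset's quoted x1 numbers would halve their apparent per-direction speed.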

Because 2.0 speed for the southbridge lanes has been reported repeatedly (along with a 2x speed DMI bus to connect them), my guess is an error when making the slides with bidirectional BW listed on the CPU and unidirectional BW on the southbridge.

I think the OP is referring to Intel Insider, the not-so-secret DRM built into the sandy bridge chips. I can't believe people are overlooking the fact that Intel is attempting to introduce DRM at the CPU level and all everyone has to say is "wow, I can't WAIT to get one of dem shiny new uber fast Sandy Bridges!"

I have a Q9400; if I compare it to the 2500K in Bench and average (straight average) all scores, the 2500K is 50% faster. The 2500K has a 24% faster base clock, so all the architecture improvements plus faster RAM, more cache and turbo mode gained only ~20% or so on average, which is decent but not awesome taking into account the C2Q is a 3+ year old design (or is it 4 years?). I know that the idle power is significantly lower due to power gating, so due to hurry-up-and-wait it consumes less power (can't remember C2Q 45nm load power, but it was not much higher than these 2011 chips).

So 50%+ faster sounds good (both chips occupy the same price bracket), but after equating clock speeds (yes, it would increase load and idle power on the C2Q) the improvement is not massive, but still noticeable.

I will be holding out for Bulldozer (possibly slightly slower, especially in lightly threaded workloads?) or Ivy Bridge, as mine is still fast enough to do what I want; I'd rather spend the money on adding an SSD or a better graphics card.
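The clock normalization in the comment above works out roughly like this (using the commenter's own figures, which are their estimates, not review numbers):

```python
# Commenter's figures: 2500K ~50% faster overall than the Q9400,
# with a 24% base-clock advantage. Dividing out the clock ratio
# isolates the per-clock gain (architecture + RAM + cache + turbo).
overall_speedup = 1.50
clock_ratio = 1.24

per_clock_gain = overall_speedup / clock_ratio
print(round(per_clock_gain, 2))   # 1.21, i.e. roughly 20% per clock
```

This is only a back-of-the-envelope split; turbo means the 2500K's effective clock advantage is larger than the base-clock ratio, so the true per-clock gain is somewhat smaller than 21%.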

I think the issue with the latest launch is the complete and utter lack of competition for what you are asking. Anand showed that the OC'ing headroom for these chips is fantastic... and, due to the thermals, even possible (though not recommended by me personally) on the stock low-profile heatsink.

That tells you that they could have significantly increased the performance of this entire line of chips, but why should they when there is no competition in sight for the near future? Let's ALL hope AMD really produces a winner in the next release, or we're going to be dealing with a plodding approach from Intel for a while. In a couple months, when the gap shrinks (again, hopefully by a lot), they simply release a "new" batch with slightly higher turbo frequencies (no need to up the base clocks, as this would only hurt power consumption with little/no upside), and bam, they get an essentially "free" release.

It stinks as a consumer, but honestly probably hurts us enthusiasts the least since most of us are going to OC these anyways if purchasing the unlocked chips.

I'm still on a C2D @ 3.85GHz, but I'm mainly a gamer. In a year or so I'll probably jump on the respin of SNB with even better thermals/OC potential.

CPUs need to be stable in Joe Sixpack's unairconditioned trailer in Alabama during August, after the heatsink is crusted in cigarette tar and dust, in one of those horrible computer desks that stuff the tower into a cupboard with just enough open space in the back for wires to get out; not just in a 70F room where all the dust is blown out regularly and the computer has good airflow. Unless something other than temperature is the limiting factor on OC headroom, that means that large amounts of OCing can be done easily by those of us who take care of our systems.

Since Joe also wants to get 8 or 10 years out of his computer before replacing it, the voltages need to be kept low enough that electromigration doesn't kill the chip after 3 or 4. Again, that's something most of us don't need to worry about much.Reply

There isn't any problem with BIOS and 3TB drives. Using GPT you can boot such a drive on either a BIOS- or UEFI-based system. You should blame only Windows, with its obsolete MS-DOS partitioning scheme and MS-DOS bootloader.Reply

It's not exactly true that the HD 3000 has less compute performance than the HD 5450; at least it's not that clear cut. It has 12 EUs, and since they are 128 bits wide, this would amount to "48 SPs" if you count like AMD. Factor in the clock difference and that's actually more cores (when running at 1300MHz, at least). Though if you only look at MAD throughput, then it is indeed less (as the Intel IGP still can't quite do MAD, though it can do MAC).

It's a bit disappointing, though, to see mostly HD 2000 on the desktop, with the exception of a few select parts. HD 2000 is not really that much faster than the Ironlake IGP (which isn't surprising - after all, Ironlake had twice the EUs, albeit at a lower clock, so the architectural improvements are still quite obvious).Reply
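That shader-counting argument can be sanity-checked with some rough arithmetic. The EU width, shader counts, and clocks below are the figures quoted in the comment above; treat them as assumptions, not authoritative specs:

```python
# Rough back-of-envelope compute comparison: Intel HD 3000 vs AMD HD 5450.
# Assumed figures (from the comment above): 12 EUs x 4 fp32 lanes (128-bit
# wide) at up to 1.30 GHz for HD 3000; 80 SPs at 0.65 GHz for HD 5450.

def gflops(lanes, ghz, flops_per_lane_per_clock):
    return lanes * ghz * flops_per_lane_per_clock

hd3000_lanes = 12 * 4          # "48 SPs" counted AMD-style
hd5450_lanes = 80

# If each lane does one op per clock (no MAD on the Intel IGP):
hd3000_single = gflops(hd3000_lanes, 1.30, 1)   # ~62.4 Gops/s
hd5450_single = gflops(hd5450_lanes, 0.65, 1)   # ~52.0 Gops/s

# If MAD counts as 2 FLOPs per lane per clock (AMD can MAD, Intel can't):
hd5450_mad = gflops(hd5450_lanes, 0.65, 2)      # ~104.0 GFLOPS

print(hd3000_single, hd5450_single, hd5450_mad)
```

Counted as single ops per clock, the HD 3000's 48 lanes at 1.3GHz come out ahead; once MAD counts as two FLOPs, the HD 5450 wins - which is exactly why it's "not that clear cut."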

My question is whether or not that chart is even right. I'm having a hard time believing that Intel would disable a feature in an "enthusiast" chip. Disabling features in lower-end cheaper chips, sure, but in "enthusiast" chips?! Unless they are afraid of those K series (but not the non-K, apparently?) cannibalizing their Xeon sales?Reply

Relatively unimportant IMHO if you're doing development. If you're running a VM/IO-intensive production workload (which isn't likely with one of these), then more important.

Remember, you need several things for VT-d to work:
1. CPU support (aka "IOMMU").
2. Chipset/PCH support (e.g., Q57 has it, P57 does not).
3. BIOS support (a number of vendor implementations are broken).
4. Hypervisor support.
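For what it's worth, points 1 and 3 can be roughly sanity-checked from userspace on a Linux box. A minimal sketch, assuming a typical Linux install (note that VT-d itself doesn't appear as a /proc/cpuinfo flag; the ACPI DMAR table is the usual sign that the CPU, chipset and BIOS together expose it):

```python
import os

# Hedged sketch: rough Linux-side checks for the requirements above.
# The paths and flag names are standard Linux interfaces, but BIOS vendors
# differ, so treat a "True" here as necessary, not sufficient.

def has_vt_x(cpuinfo_text):
    # 'vmx' is Intel's basic VT-x flag in /proc/cpuinfo (AMD's is 'svm').
    # VT-x is a prerequisite; it does NOT by itself imply VT-d.
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return "vmx" in flags

def bios_exposes_dmar(path_exists=os.path.exists):
    # The ACPI DMAR table is how the BIOS advertises VT-d to the OS.
    return path_exists("/sys/firmware/acpi/tables/DMAR")
```

Usage would be something like `has_vt_x(open("/proc/cpuinfo").read())`; hypervisor support (point 4) still has to be verified separately against your VMM's documentation.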

Any of 1-3 might result in "No" for the K parts. Even though it *should* apply only to the CPU's capabilities, Intel may simply be saying it is not supported. (Hard to tell as the detailed info isn't up on Intel's ark site yet, and it would otherwise require examining the CPU capability registers to determine.)

However, it's likely to be an intentional omission on Intel's part as, e.g., the i7-875K doesn't support VT-d either. As to why that might be, there are several possible reasons, many justifiable IMHO. Specifically, the K parts are targeted at people who are likely to OC, and OC'ing--even a wee bit, especially when using VT-d--may result in instability severe enough to make the system unusable.

If VT-d is potentially important to you, then I suggest you work back through steps 4-1 above; all other things being equal, 4-2 are likely to be far more important. If you're running VM/IO-intensive workloads where performance and VT-d capability are a priority, then IMHO whether you can OC the part will be 0 or -1 on the list of priorities.

And while VT-d can make direct access to hardware a more effective option (again, assuming hypervisor support), its primary purpose is to make all IO more efficient in a virtualized environment (e.g., IOMMU and interrupt mapping). It's less a matter of "Do I have to have it to get to first base?" than "How much inefficiency am I willing to tolerate?" And again, unless you're running IO-intensive VM workloads in a production environment, the answer is probably "The difference is unlikely to be noticeable for the work [development] I do."

p.s. code65536 -- I doubt Intel is concerned with OC'd SB parts cannibalizing Xeon sales. (I'd guess the count of potentially lost Xeon sales could be counted on two hands with fingers to spare. :) Stability is far more important than pure speed for anyone I know running VM-intensive loads, and, e.g., the lack of ECC support on these parts is a deal killer for me. YMMV.Reply

For as long as MS dev tools take to install, I'd really like to be able to do all my dev work in a VM backed up to the corporate LAN, to ease the pain of a new laptop and to make a loaner actually useful. Unfortunately, the combination of lousy performance with MS VPC and the inability of VPC to run two virtual monitors of different sizes means I don't have a choice about running Visual Studio in my main OS install.Reply

So just because I want to use VT-d I'll also be limited to 6 EUs and have no possibility to overclock?

Then there's the chipset issue. Even if I got the enthusiast-targeted K series, I would still need to get either:
a) the H67 chipset, to be able to use the HD unit and the Quick Sync capability - yet not be able to overclock; or
b) the P67 chipset, to be able to overclock - yet lose the Quick Sync capability and the point of having 6 extra EUs, as the HD unit can't be used at all.

Exactly my thoughts. No Quick Sync for enthusiasts right now - that's a disappointment. I think it should be stated more clearly in the review. Another disappointment: the missing 23.976 fps video playback.Reply

Yeah, OK, the lack of support for VT-d ostensibly sucks on the K parts, but as previously posted, I think there may be good reasons for it. But let's look at it objectively...

1. Do you have an IO-intensive VM workload that requires VT-d?
2. Is the inefficiency/time incurred by the lack of VT-d support egregious?
3. Do your hypervisor, BIOS and chipset support VT-d?

IF you answered "NO" or "I don't know" to any of those questions, THEN what does it matter? ELSE IF you answered "YES" to all of those questions, THEN IMHO SB isn't the solution you're looking for. END IF. Simple as that.

So because of you--who want that feature and the ability to OC, which is likely 0.001% of customers, and who are too cheap to spend the $300-400 for a real solution--the vendor should spend 10-100X to support that capability, which would thus *significantly* increase the cost to the other 99.999% of customers. And that makes sense how, and to whom (other than you and the other 0.001%)?

IMHO you demand a solution at no extra cost to a potential problem you do not have (or have not articulated); or you demand a solution at no extra cost to a problem you have and for which the market is not yet prepared to offer at a cost you find acceptable (regardless of vendor).Reply

General best practice is not to feed the trolls - but in this case your arguments are so weak I will go ahead anyway.

First off, I like how you - without having any insight in my usage profile - question my need for VT-d and choose to call it "lets look at it objectively".

VT-d is excellent when...
a) developing hardware drivers and trying to validate functionality on different platforms;
b) fooling around with GPU passthrough, something I did indeed hope to deploy with SB.

So yes, I am in need of VT-d - "Simple as that".

Secondly, _all_ the figures you've presented are pulled out of your ass. I'll be honest, I had a hard time following your argument as much of what you said makes no sense.

So I should spend more money to get an equivalent retail SKU? Well then, Sir, please go ahead and show me where I can get a retail SB SKU clocked at >4.4GHz. Not only that, you're in essence implying that people only overclock because they're cheap. In case you've missed it, it's the enthusiasts buying high-end components that enable much of the next-gen research and development.

The rest - especially the part with 10-100X cost implication for vendors - is the biggest pile of manure I've come across on Anandtech. What we're seeing here is a vendor stripping off already existing functionality from a cheaper unit while at the same time asking for a premium price.

If I were to make a car analogy, it'd be the same as if Ferrari sold the 458 in two versions: one with a standard engine, and one with a more powerful engine that lacks headlights. By your reasoning - as my usage profile is in need of headlights - I'd just have to settle for the tame version. Not only would Ferrari lose the added money they'd get from selling a premium version, they would lose a sale, as I'd rather wait until they present a version that fits my needs. I sure hope you're not running a business.

There is no other way to put it, Intel fucked up. I'd be jumping on the SB-bandwagon right now if it wasn't for this. Instead, I'll be waiting.Reply

Apologies, didn't mean to come across as a troll or in-your-face idjit (although I admittedly did--lesson learned). Everyone has different requirements/demands, and I presumed and assumed too much when I should not have, and should have been more measured in my response.

You're entirely correct to call me on the fact that I know little or nothing about your requirements. Mea culpa. That said, I think SB is not for the likes of you (or me). While it is a "mainstream" part, it has a few too many warts.

Does that mean Intel "fucked up"? IMHO no--they made a conscious decision to serve a specific market and not serve others. And no, that "10-100X" is not hot air but based on costing from several large scale deployments. Frickin amazing what a few outliers can do to your cost/budget.Reply

I didn't have time to read all reviews, and furthermore I am not sure I will be able to express what I mean with the right nuances, since English is not my first language.

For the moment I am a bit disappointed. To account for my relative coldness, it is important to explain where I start from:

1) For gaming, I already have more than I need with a quad core 775 and a recent 6x ati graphic card.

2) For office work, I already have more than I need with an i3 clarkdale.

Therefore since I am already equipped, I am of course much colder than those who need to buy a new rig just now.

Also, the joy of trying a new processor must be tempered with several considerations:

1) With Sandy Bridge, you have to add the price of a new mobo to the price of the processor. That makes it much more expensive. And you are not even sure that 1155 will be kept for Ivy Bridge. That is annoying.

2) There are always other valuable things that you can buy for a rig, apart from sheer processor horsepower: more storage, a better monitor...

3) The power improvement that comes with Sandy Bridge is what I would call a normal improvement for a new generation of processors. It is certainly not a quantum leap in the nature of processors.

Now, there are two things I really dislike:

1) If you want to use P67 with a graphics card, you still have that piece of hardware, the IGP, that you actually bought and that you cannot use. That seems to me extremely inelegant compared to the 775 generation of processors. It is not an elegant architecture.

2) If you want to use H67 and the Intel IGP for office work and movies, the improvement compared to Clarkdale is not sufficient to justify buying a new processor and a new mobo. With H67 you will be able to do office work fluently and watch movies quasi-perfectly - but with Clarkdale you already could.

The one thing that I like is the improvement in power consumption. Otherwise it all seems to me a bit awkward.Reply

You say you want Intel to provide a $70 GPU. Well, here's a math problem for you: if the GPU on a 2600K is about 22% of the die, and the chip costs $317 retail, then how much are you paying for the GPU? If you guessed $70, you win! Congrats, you just paid $70 for a crap GPU. The question is... why? There is no tock here... only ridiculously high margins for Intel.Reply
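The arithmetic behind that math problem is straightforward (the 22% die share and the $317 retail price are the figures quoted in the comment above, taken as assumptions):

```python
# The comment's math: if ~22% of the 2600K die is GPU and the chip retails
# for $317, the implied cost of the GPU portion is area share x price.
gpu_share = 0.22
retail_price = 317
implied_gpu_cost = gpu_share * retail_price  # ~69.74
print(round(implied_gpu_cost))  # ~70
```

Of course, die area doesn't map linearly to manufacturing cost or retail price, so this is an upper-bound style argument, not an accounting of Intel's actual costs.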

Anand, I'm not the biggest Intel fan (due to their past grey-area dealings), but I don't think the naming is that confusing. As I understand it, they will move to the 3x00 series with Ivy Bridge; basically, the higher the second number, the faster the chip.

It would be nice if there were something in the name to easily tell consumers the number of cores and threads, but the majority of consumers just want the fastest chip for their money and don't care how many cores or threads it has.

The ix part tells enthusiasts the number of cores/threads/turbo, with the i3 having 2/4/no, the i5 having 4/4/yes and the i7 having 4/8/yes. I find this much simpler than the 2010 chips, where, for example, some i5 chips were dual core and some were quad core.

I think AMD's GPUs have a sensible naming convention (except for the 68xx/69xx renaming), without the additional i3/i5/i7 modifier, by using the second number as the tier indicator while maintaining the rule of thumb that "a higher number within a generation means faster". If Intel adopted something similar, it would be better.

That said, I wish they'd stick with a naming convention for at least 3 or 4 generations...Reply

",,but until then you either have to use the integrated GPU alone or run a multimonitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync"

So have you tested the transcoding with QS using an H67 chipset based motherboard? The test rig never mentions any H67 motherboard. I am somehow not able to follow how you got the scores for the transcode test. How do you select the codepath if switching graphics on a desktop motherboard is not possible? Please throw some light on this, as I am a bit confused here. You say that QS gives better quality output than a GTX 460; does that mean I need not invest in a discrete GPU if I am not gaming? Moreover, why should I be forced to use a discrete GPU on a P67 board when, according to your tests, Intel's QS gives better output?Reply

I need to update the test table. All of the Quick Sync tests were run on Intel's H67 motherboard. Presently if you want to use Quick Sync you'll need to have an H67 motherboard. Hopefully Z68 + switchable graphics will fix this in Q2.

I think this needs to be a front page comment because it is a serious deficiency that all of your reviews fail to properly describe. I read them all and it wasn't until the comments came out that this was brought to light. Seriously SNB is a fantastic chip but this CPU/mobo issue is not insignificant for a lot of people.Reply

I haven't read through all the comments, and sorry if it's been said, but I find it weird that the most "enthusiast" chip, the K, comes with the better IGP when most people buying this chip will for the most part end up buying a discrete GPU.Reply

Where in the right-hand corner you have a drop-down menu with Intel Quick Sync selected - will you see a discrete GPU if you expand it? Does it not mean switching between graphics solutions? In the review it's mentioned that switchable graphics has yet to find its way to desktop mobos.Reply

It looks like that drop down is dithered, which means it's only displaying the QS system at the moment, but has a possibility to select multiple options in the future or maybe if you had 2 graphics cards etc.Reply

I also take issue with the statement that the 890GX (really HD 4290) is the current onboard video cream of the crop. Test after test (on other sites) show it to be a bit slower than the HD4250, even though it has higher specs.

I also think Intel is going to have a problem with folks comparing their onboard HD3000 to AMD's HD 4290, it just sounds older and slower.

No word on Linux video drivers for the new HD2000 and HD3000? Considering what a mess KMS has made of the old i810 drivers, we may be entering an era where accelerated onboard Intel video is no longer supported on Linux.Reply

Actually, 890GX is just a re-badged 780G from 2008 with sideport memory.

And no, the HD4250 is NOT faster. While some specific implementation of the 890GX without sideport memory _might_ be slower, it would also be cheaper and not really a "proper" representative. (An 890GX without sideport is like saying an i3 with dual-channel RAM is "faster" in games than an i5 with single-channel RAM...)Reply

Putting the 3000 on the 2600K and 2500K parts ALMOST made sense as an up-sell, but you can't even use the IGP on a P series board while you're overclocking! If the Z series won't be out for a while, why the hell would I buy an overclocking chip now? So I can spend more money to replace my H series motherboard with a Z series? Nice try.

It's frustrating that you have to pick your sacrifice... you either get the 3000 with the K SKU, or you get VT-d and TXT with the standard SKU. Intel doesn't have an offering with both, which is kind of ridiculous for high-end chips.Reply

It seems odd that the 3000 series graphics engine is only included on parts designed for overclocking, while the boards that support overclocking can't use integrated graphics. I would have thought the other way around would have made more sense.

In any case the 2600K and 2500K look like great value parts and are just what I was waiting for!Reply

Does anyone know if QuickSync will appear on LGA-2011 chips? I know they aren't going to have the general purpose GPU components, but this is enough of a performance booster that I'd think Intel would want to keep it on their high end consumer platform in some fashion.Reply

This functionality will likely appear in Sandybridge Xeons for socket 1155. Intel *generally* segments the Xeons by core count and clock speed, not by feature set like they do for consumer chips. The other feature Intel is holding back is ECC which should be standard in socket 1155 Xeons.Reply

It's a hardware security feature, best known for the Trusted Platform Module, an onboard cryptographic device used in some corporate computers but not in consumer systems. Probably they just want to keep people from building high-end secure servers with cheap, overclocked K parts instead of the much more profitable Xeons that cost 2-3x as much.

Only to the extent that, like all Intel Core 2 and later systems, it supports a TPM module to allow locking down servers in the enterprise market, and that the system *could* be used to implement consumer DRM at some hypothetical point in the future; but since consumer systems aren't sold with TPM modules, it would have no impact on systems bought without one.Reply

Thanks for adding the Visual Studio compilation benchmark (although you omitted the 920). It seems that neither an SSD nor a better processor can do much for that annoying time waster. It doesn't matter how much money you throw at it.

I also wish to see SLI/3-way SLI/CrossFire performance, since the better cards are frequently CPU-bottlenecked. How much better does it do relative to an i7 920? And with a good cooler at 5GHz?

Note: you mention 3 video cards in the test setup, but which one is used in the benchmarks?Reply

You're welcome on the VS compile benchmark. I'm going to keep playing with the test to see if I can use it in our SSD reviews going forward :)

I want to do more GPU investigations but they'll have to wait until after CES.

I've also updated the gaming performance page indicating what GPU was used in each game, as well as the settings for each game. Sorry, I just ran out of time last night and had to catch a flight early this morning for CES.

I wonder how this CPU scores with SwiftShader. The CPU part actually has more computing power than the GPU part. All that's lacking to really make it efficient at graphics is support for gather/scatter instructions. We could then have CPUs with more generic cores instead.Reply

Great review as always, but on the HTPC page I would have wished for a comparison of the deinterlacing quality of SD (480i/576i) and HD (1080i) material. ATI's onboard chips don't offer vector adaptive deinterlacing for 1080i material - can Intel do better?

My HD5770 does a pretty fine job, but I want to lose the dedicated video card in my next HTPC.Reply

Thanks a ton, Anand, for adding a compiler benchmark. I spend the vast majority of my time on builds, and this will help me spec out a few new machines. It's interesting to see results indicating that I should not go anywhere near a low-end Sandy Bridge system, and that a lot of cheap AMD cores might not be a bad idea.Reply

Can't believe the 23.976Hz output bug is still in SB after all this time. Several years ago the G35 had this issue and Intel proclaimed they'd have a fix for it. Subsequently the G45 still had the problem, and even the iCores, but SB? C'mon... it's a big issue for HTPC buffs, because there's too much judder from 1) LCD displays and 2) the 3:2 cadencing from film-to-video conversion, so 1:1 (or rather 5:5 for most 120Hz sets) was a must for large-screen HTPC setups. Yes, the bitstreaming is good and all, but most folks are content with just 7.1 DD/DTS output. I guess we'll have to wait (again) for IB and cling to my ol' nVidia 9300 for now. :(Reply

I was just looking at the downloadable pictures, comparing them, and noticed a couple of differences. Maybe they are just a driver tweak, but I thought I remembered ATI and/or nVidia getting slammed in the past for pulling similar tactics.

The first thing I noticed when comparing the AA shots in COD is that the Sandy Bridge graphics don't appear to be applying AA to the twigs on the ground. Or is this just an appearance thing, where Intel might have a different algorithm that causes this?

The second is a little more obvious to me. In the Dirt 2 pictures, I noticed that Sandy Bridge is blurring, and not clearly rendering, the distant objects. The sign on the right side is what caught my eye.

One last thing is the DAO pictures. I've seen someone (in the past) post up pictures of the same exact place in the game. The quality looked a lot better than what Anand has shown, and I was wondering if that is correct. I don't have the game, so I have no way to confirm.

As always, Anand, I appreciate the time you and your staff take to do all of your articles and the quality that results. It's just one of the reasons why I've found myself coming back here ever since the early years of your website.Reply

Anand, great review as always. I love the in-depth feature analysis that AnandTech provides.

BIOS updates have been released for Gigabyte, Asus, and Intel P67 boards that correct an internal PLL overvolt issue that was artificially limiting overclocks. Users in the thread over at HWBot are reporting that processors that were stuck at 4.8GHz before are now hitting 5.4GHz: http://hwbot.org/forum/showthread.php?t=15952

Would you be able to do a quick update on the overclocking results for your chips with the new BIOS updates?Reply

In the Quick Sync test I missed a comparison with x264 - currently the fastest and highest-quality H.264 encoder - on a fast CPU, for example using the superfast and veryslow presets (one for speed with reasonable quality, the other for quality with reasonable speed). Also, with too high a bitrate, even the crappiest encoder will look good...

I also wanted to see how low you can undervolt an i5-2400 when it has hit the overclocking cap, and what the power consumption is then. The same for the other locked CPUs would be cool too. Also, what is the power consumption of the Sandy Bridge CPUs when running the Quick Sync hardware encoder?Reply

Wow, what a SLAP in AMD's face! The idea they nursed for a gazillion years and were set to finally release somewhere this week is brought to you, dear customer, first to the market, with a sudden change in the NDA deadline to please you sooner with a hyperperformer from Intel. Who cares that NDAs play an important part in all planning activities, PR, logistics and whatever follows - what matters is that they are first to put the GPU on-die, and this is what the average Joe will now know, with a bit of PR, perhaps. Snatch another design win. Hey, AMD, remember that pocket money the court ordered us to pay you? SLAP! And the licence? SLAP! Nicely planned and executed whilst everyone was so distracted with the DAAMIT-versus-nVidia battles and, ironically, a lack of leaks from the red camp. I just hope Bulldozer will kick some asses, even though I doubt it's really going to happen...Reply

If AMD didn't put a steel toed boot into their own nuts by blowing the original 09Q3 release date for fusion I'd have more sympathy for them. Intel won because they made their launch date while the competition blew theirs by at least half a year.Reply

With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.

A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.

Right now it looks like if you're not careful you could end up paying ~$100 more for a 2600K instead of a 2500K and end up with worse performance.Reply

The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than the 2600K. If it weren't for that clockspeed deficiency, it looks like it might also be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. Just about the only games that look like a "win" for HT are Civ5 and Fallout 3.

The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.

Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clockspeeds and locked multipliers (as with the i5-7xx and i7-8xx.)

In the past HT was a no-brainer; it might have hurt performance in some cases but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear cut, especially if those enthusiasts are gamers - and most of them are.Reply

I don't ever watch soap operas (why somebody can enjoy such crap is beyond me) but I game a lot. All my free time is spent gaming.

High frame rate reminds me of good video cards (or games that are not cutting edge) and the so called film 24p reminds me of the Michael Bay movies where stuff happens fast but you can't see anything, like in transformers.

Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is gaming.Reply

Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!

SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification/more in depth articles on a couple of areas.

1. To clarify: if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forgo arguably the best new feature, Quick Sync?

2. Would be great to have some more info on the Overclocking of both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync and some OC'd CPU benchmarks.

3. A look at the PQ of the on-chip GPU when decoding video compared to discrete low-end rivals from nVidia and AMD, as it is likely that the main market for this will be those wanting to decode video as opposed to play games. If you're feeling generous, maybe a run through the HQV benchmark? :P

Thanks for reading, and congrats again for having the best launch-day content on the web.Reply

In the Quantum of Solace comparison, x86 and Radeon screens are the same.

I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264. So entirely software-based. My i7 920 does 140fps, which isn't too far away from Quick Sync. I'd love to see some quality comparisons between x264 on fastest settings and QS.Reply

Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.Reply

One thing I miss is clock-for-clock benchmarks to highlight the effect of the architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.

Now Sandy Bridge, at ~$200, targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (a $40-value GPU is even included). And the overclocking ability!

If I understood it correctly: the Intel Core i7 2600K @ 4.4GHz drawing 111W under load is quite efficient. At 3.4GHz it draws 86W, so ~30% more clock (4.4GHz) = ~30% more performance for ~30% more power... that would mean power consumption and performance scale roughly 1:1.
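A quick check of that scaling estimate, using the load-power figures quoted above (86W stock at 3.4GHz, 111W overclocked at 4.4GHz; both assumed from the review):

```python
# Checking the ~1:1 power-vs-performance scaling claim with the quoted
# numbers: stock 3.4 GHz at 86 W load vs. overclocked 4.4 GHz at 111 W load.
clock_gain = 4.4 / 3.4 - 1      # fractional clock increase
power_gain = 111 / 86 - 1       # fractional power increase
print(round(clock_gain * 100), round(power_gain * 100))  # 29 29
```

Both ratios land at about 29%, so within rounding the 1:1 claim holds for these two data points - consistent with the voltage not being raised much for that overclock, since at constant voltage dynamic power scales roughly linearly with frequency.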

Many people need more performance per core, but not more cores. At 111W under load, this would be the product they wanted - e.g., people who make music on PCs, not playing MP3s but mixing and producing music.

But for more cores, the X6 Thuban is the better choice on a budget. For building a server on a budget, for example, Intel has no product to rival it. Or developers - they may also want as many cores as they can get to test their apps' multithreading performance. And AMD also scores with its more conservative approach when it comes to upgrading motherboards, for example. People don't like to buy a new motherboard every time they upgrade the CPU.Reply

If I wanted to spend a big lot of money every year on something I'll sell on eBay at half price a few months later, and if I liked crappy-quality images on my monitor, then I would buy Sandy Bridge... but sorry, I'm no brainless buyer for Intel.Reply

It really impressed me as I do a lot of video transcoding and it's extremely slow on my triple-core Phenom II X3 720, even though I overclocked it to 4GHz. But there is one question: the acceleration needs EU in the GPU, and GPU is disabled in P67 chipset. Does it mean that if I paired my SNB with a P67 motherboard, I won't be able to use the transcoding accelerator?Reply

Not talking about SNB-E this time; I know it will be the performance king again. But I wonder if Bulldozer can at least gain some performance advantage over SNB, because it makes no sense that 8 cores running at a stunning 4.0GHz wouldn't overrun 4 cores below 3.5GHz, no matter what architectural differences there are between these two chips. SNB is only the new generation's mid-range part; it will be out-performed by high-end Bulldozers. AMD will hold the low end, just as it does now; as long as Bulldozer regains some of the ground the Phenoms lost in the mainstream and performance markets, things will be much better for it. The enthusiast market is not AMD's cup of tea, just as in GPUs: let nVidia take the performance crown and strike from the lower performance niches.Reply

I don't think we'll know until AMD releases Bulldozer and Intel counters (if they do). Seems the SNB chips can run significantly faster than they do right now, so if necessary Intel could release new models (or a firmware update) that allows turbo modes up past 4GHz.Reply

They are already selling in Malaysia, but if you don't live in Malaysia then you are SOL :) ... I see rumors around that the NDA was supposed to expire on the 5th with retail availability on the 9th... I was thinking about making the leap, but I think I will hold off for more info on BD and Socket 2011 SB.Reply

Intel has essentially shot itself in the foot this time. Between the letter-suffix restrictions, the new chipset, and the crazy chipset differentiation between P and H, it's crazy. Not to mention they lack USB 3.0, the ability to have an overclocking mobo with integrated graphics, and the stupid Turbo Boost restrictions.

I'll go even further and say that the Core i3 is pure crap; while it's better than the old Core i3, they are essentially leaving the biggest market - everything up to $200 - wide open to AMD.

Those who purchase CPUs at $200 and higher are in luck with the 2500 and 2600 variants, but for the majority of us who purchase CPUs below $200, it's crap.

Essentially, if you want gaming performance you buy an i3-2100, but if you want better overall performance, go for a Phenom II.

Hopefully AMD comes up with some great CPUs below the $200 range with four cores, unrestricted turbo boost, and unlocked multipliers. Reply

It seems that these benchmarks test the CPU (cores) and GPU parts of Sandy Bridge separately. I'd like to know more about the effects of the CPU and the (usually data-intensive) GPU sharing the L3 cache.

One advantage of a system with a discrete GPU is that the GPU and CPU cores can happily work simultaneously without greatly affecting one another. This is no longer the case with Sandy Bridge.

A test I would like to see is a graphics-intensive application running while another application performs some multi-threaded, ATLAS-tuned LAPACK computations. Does either the GPU or the CPU swamp the L3 cache? Are there any instances of starvation? What happens to the performance of each application? What happens to frame rates? What happens to execution times?Reply

To me it seems that marketing, rather than engineering, is now defining the processors at Intel. That has always been the case to a degree, but I think it is more evident now than ever.

Essentially, if you want the features that the new architecture brings, you have to shell out for the higher-end models. My ideal processor would be an i5-2520M for the desktop: reasonable clocks, good turbo speeds (which could be higher on the desktop, since the TDP is less limited), HT, good graphics, etc. The combination of 2 cores and HT provides a good balance between power consumption and performance for most users.

Its desktop equivalent price-wise is the 2500, which has no HT and a much higher TDP because of the four cores. Alternatively, the 2500S, 2400S, or 2390T could be considered if that is too overpriced.

Intel has introduced too much differentiation in this generation, and in an Apple-like fashion: they force you to pay more for stuff you don't need just to get an extra feature (e.g. VT support, good graphics, etc.) that practically costs nothing, since the silicon is already there. Bottom line: if you want the full functionality of the silicon you're buying, you have to pay for the higher-end models. Moreover, having features for specific functions (AES, transcoding, etc.) and good graphics makes more sense in lower-end models, where CPU power is limited.

This is becoming like the software market, where you have to pay extra for licenses for specific functionality. I wouldn't be surprised if Intel starts selling "upgrade licenses" sometime in the future that simply unlock features.

I strongly prefer AMD's approach, where all the features are available on all models.

I am also a bit annoyed that there is very little discussion of this problem in the review. I agree that Sandy Bridge is technologically impressive, but the artificial limiting of functionality is anti-technological.Reply

Man, this is awesome, my wallet is trying to hide, but it won't do it any good...

I took the jump to AMD when Phenom II arrived (a friend of mine bought my C2D E7400 system), and even then I regretted it by the time I was done building. There's no two ways about it: Intel systems, if they aren't absolute low-end, run so much smoother. Which seems to be the case again, even at a reasonable price.

There's one thing about the review I don't really understand: "...Another Anandtech editor put it this way: You get the same performance at a lower price..."

Has he read the review?

As far as I can see, you get considerably more performance at a lower price.Reply

I think this might be an error in your chart -- the last one on page 3 shows a Y for the i3-2100 in the AES-NI column. I would love to have this feature on an i3 CPU, but the following paragraph states "Intel also uses AES-NI as a reason to force users away from the i3 and towards the i5" which leads me to believe that i3 doesn't have said feature.

I have to disagree with Anand; I feel the QuickSync image is the best of the four in all cases. Yes, there is some edge-softening going on, so you lose some of the finer detail that ATi and SNB gives you, but when viewing on a small screen such as one on an iPhone/iPod, I'd rather have the smoothed-out shapes than pixel-perfect detail.Reply

I started my computing days with Intel but I'm so put off by the way Intel is marketing their new toys. Get this but you can't have that...buy that, but your purchase must include other things. And even after I throw my wallet to Intel, I still would not have a OC'd Sandy Bridge with useful IGP and Quicksync. But wait, throw more money on a Z68 a little later. Oh...and there's a shiny new LGA2011 in the works. Anyone worried that they started naming sockets after the year it comes out? Yay for spending!

I'm a little confused why Quick Sync needs to have a monitor connected to the MB to work. I'm trying to understand why having a monitor connected is so important for video transcoding, vs. playback etc.

Is this a software limitation? Either in the UEFI (BIOS) or drivers? Or something more systemic in the hardware.

What happens on a P67 motherboard? Does the P67 disable the on-die GPU, effectively disabling Quick Sync support? This seems a very unfortunate oversight for such a promising feature. Will a future driver/firmware update resolve this limitation?Reply

I think Xenos is, in the end, still a good two to two and a half times more powerful than the Radeon 5450. Xenos does not have to be OpenCL, DirectCompute, DX11, or even fully DX10 compliant (a 50-million-transistor jump from the 4550, going from DX10.1 to 11), nor does it contain hardware video decode or integrated HDMI output with a 5.1 audio controller (even the old Radeon 3200 clocks in at 150+ million transistors). What I would like some clarification on is whether the transistor count for Xenos includes northbridge functions.

Clearly PC GPUs have insane transistor counts in order to be highly compatible. It is commendable how well the Intel HD 3000 does with only 115 million, but it's important to note that older products like the X1900 had 384 million transistors back when DX9.0c was the aim, and in pure throughput it should match or closely trail Xenos at 500MHz. Going from the 3450 to the 4550, we go up another 60 million transistors for 8 more SIMDs of a similar DX10.1-compatible nature, as well as the probable increases for hardware video decode, etc. So basically, to come into a similar order as Xenos in terms of SIMD count (of which Xenos has 48 of its own type, I must emphasize), we would need 60 million transistors per 8 SIMDs, which would put us at about 360 million transistors for a 48-SIMD (240 SP) AMD part that is DX10.1 compatible and not equipped with anything unrelated to graphics processing.
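That scaling estimate is simple enough to replay arithmetically. This sketch just uses the figures assumed above, which are the commenter's rough per-generation deltas rather than official transistor counts:

```python
# Back-of-envelope replay of the estimate above. Assumption (from the
# comment, not an official figure): going from the HD 3450 to the
# HD 4550 added ~60M transistors for 8 extra SIMDs, so scale that
# linearly to a hypothetical 48-SIMD (240 SP) DX10.1 part.
transistors_per_8_simds = 60_000_000
target_simds = 48

estimate = (target_simds // 8) * transistors_per_8_simds
print(f"{estimate / 1e6:.0f}M transistors")  # 360M, the figure quoted above
```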

Yes, it's a most basic comparison (and probably fundamentally wrong in some regards), but I think it sheds some light on the idea that the Radeon HD 5450 really still pales in comparison to the Xenos. We have much better GPUs like Redwood that are twice as powerful with their higher clock speeds + 400 SPs (627 Million transistors total) and consume less energy than Xenos ever did. Of course, this isn't taking memory bandwidth or framebuffer size into account, nor the added benefits of console optimization.Reply

I'm still on E6600 + P965 board. Honestly, I would upgrade my video card (HD3850) before doing a complete system upgrade, even with Sandy Bridge being so much faster than my old Conroe. I have yet to run a game that wasn't playable at full detail. Maybe my standards are just lower than others.Reply

Though SB will be great for some applications, there are still rough edges in terms of the overall platform. I think it will be best to wait for SNB-E or at least the Z68. SNB-E seems to be the best future-proofing bet.

I also wonder how a part rated for 95W TDP was drawing 111W in the 4.4GHz OC (the Power Consumption Page). SB's power budget controller must be really smart to allow the higher performance without throttling down, assuming your cooling system can manage the thermals.Reply

Anand, Thanks for the great schooling and deep test results -- something surely representing an enormous amount of time to write, produce, and massage within Intel's bumped-forward official announcement date.

Here's a crazy work-around question:

Can I have my Quick Sync cake and eat my single-monitor-with-discrete-graphics-card too if I, say:

It makes me SO angry when Intel does stupid shit like disabling HT on most of their CPUs, even though the CPU already has it on the die and they already paid for it. It would literally cost them nothing to turn HT on for those CPUs, yet the greedy bastards don't do it.Reply

The HD Graphics 3000 performance is pretty impressive, but won't be utilized by most. Most who utilize Intel desktop graphics will be using the HD Graphics 2000, which is okay, but I ran back to the AMD Brazos performance review to get some comparisons.

In Modern Warfare 2, at 1024 x 768, the new Intel HD Graphics 2000 in the Core i3 2100 barely bests the E-350. Hmm--that's when it's coupled with a full-powered, hyper-threaded desktop compute core that would run circles around the compute side of the Brazos E-350, an 18w, ultra-thin chip.

This either makes Intel's graphics less impressive, or AMD's more impressive. For me, I'm more impressed with the graphics power in the 18w Brazos chip, and I'm very excited by what mainstream Llano desktop chips (65w - 95w) will bring, graphics-wise. Should be the perfect HTPC solution, all on the CPU (ahem, APU, I mean).

I'm very impressed with Intel's video transcoding, however. It makes CUDA seem... less impressive, like a bunch of hoopla. Scary what Intel can do when it decides it cares about doing something. Reply

Very disappointed in the lack of vt-d and txt on k-variants. They are after all the high end products. I also find the fact that only the k-variants having the faster GPU very peculiar, as those are the CPUs most likely to be paired with a discrete GPU.Reply

Agreed. I find the exclusion of VT-d particularly irritating: many of the overclockers and enthusiasts to whom the K chips are marketed also use virtualization. Though I don't expect many enthusiasts, if any, to miss TXT (it's more for locked down corporate systems, media appliances, game consoles, etc.).

With the Z68 chipset coming in the indeterminate near future, the faster GPU on K chips would have made sense if the K chips came with every other feature enabled (i.e. if they were the "do-everything" chips).

Also, I'd like to have the Sandy Bridge video encode/decode features separate from the GPU functionality - i.e. I'd like to choose between Intel and Nvidia/AMD video decode/encode when using a discrete GPU.Reply

"perhaps we should return to just labeling these things with their clock speeds and core counts? After all, it’s what Apple does—and that’s a company that still refuses to put more than one button on its mice. Maybe it’s worth a try."

I hate to sound like the resident Mac fanboy (I'm platform agnostic) but I want to point out:

1. Apple sells by trim and display, they don't really make a big deal of the CPU (probably because they stick to low-end and midrange CPUs)

2. They have been shipping multi-button mice for nearly six years now. Come on!Reply

- GTX 460 image quality: definitely the worst
- 6870 image quality: next
- QuickSync/SNB image quality: the best (marginally better than the 6870); I did notice some color loss in the flowers behind the umbrella when I zoomed in on the QuickSync picture, so I'd have to give SNB the title in terms of quality. QuickSync gets the title in terms of performance.Reply

My last Intel cpu was a prescott 2.4ghz P4 OC'd to over 3ghz... back in 2004? My last 3 main system builds all AMD.... I was thinking about going to an X6 in the near future, now I guess maybe not. My price point is pretty much $200 for the cpu + motherboard so maybe I'll have to wait a couple months.

Same here. Very disappointed, as I would have purchased a better heatsink if I had known. I guess I'll just do the install with the standard crap HS and hold off on overclocking until I get a better one.Reply

Many of us are using older equipment. For those of us with limited funds, it would have been nice if you had added the Intel Q9650 and run all the game benchmarks at 3.4GHz [the speed of the 2600K], except for the X4 975BE, which could be left at its default 3.6GHz.

I have a QX9650 that I purchased from eBay and it does 4GHz+ with ease, in a Gigabyte P35-DS3R motherboard, even with my ancient cooler [Thermalright XP-90] that I pulled from a socket 478 motherboard [$5 adapter].

Note: I lapped the XP-90 with a slight convex shape to better use with un-lapped CPUs.

In any event, a "quick and dirty" or simple overclock would have yielded at least some usable information. To save time, no need to try to get the maximum speed from all components.

As long as the CPUs were already overclocked, you could run all benchmarks at those speeds, not just games. Many of us overclock to get more for our money.

You included the ancient Q6600 at its slow default speed in some of the benchmarks. Why didn't you include it in all of them?

Your normal benchmark page does not include a full, or nearly full, list of games and CPUs, so comparisons are difficult to find; example here: anandtech.com/bench/CPU/62

Where does this leave those of us with older equipment that is still chugging along?Reply

Just thought I would comment with my experience. I am unable to get Blu-ray playback, or even CableCARD TV playback, with the Intel integrated graphics on my new i5-2500K with an Asus motherboard. Why, you ask? The same problem Intel has always had: it doesn't handle EDIDs correctly when there is a receiver in the path between it and the display.

To be fair, I have an older Westinghouse monitor and an Onkyo TX-SR606. But the fact that all I had to do was reinstall my HD5450 (which I wanted to get rid of when I did the upgrade to Sandy Bridge) and all my problems were gone kind of points to the fact that Intel still hasn't gotten it right when it comes to EDIDs, HDCP handshakes, etc.

So sad too, because otherwise I love the upgraded platform for my HTPC. Just wish I didn't have to add-in the discrete graphics.Reply

As I understand from the article, you used just this one piece of software for all these tests, and I understand why. But is that enough to conclude that CUDA causes bad or low picture quality?

I am very interested in, and do research on, H.264 and x264 encoding and decoding performance, especially on the GPU. I have tested Xilisoft Video Converter 6, which supports CUDA, and I didn't have problems with low picture quality when using CUDA. I did these tests on an nVidia 8600 GT for the TV station that I work for, while researching a way to compress video for sending over the internet with little or no quality loss.

So, could it be that ArcSoft Media Converter simply cooperates badly with CUDA?

And I must note here how well the AMD Phenom II X6 performs, comparable to the nVidia GTX 460. This means one could buy a motherboard with integrated graphics and a Phenom II X6 and have very good encoding performance in terms of both speed and quality. Intel is the winner here, no doubt, but the jumping from socket to socket and the total platform changes trouble me.

I'm curious why Bad Company 2 gets left out of Anand's CPU benchmarks. It seems to be a CPU-dependent game: when I play it, all four cores are nearly maxed out while my GPU barely reaches 60% usage, whereas most other games seem to be the opposite. Reply

Nice article. It cleared up much about the new chips I had questions on.

A suggestion. I have worked in the chip making business. Perhaps you could run an article on how bin-splits and features are affected by yields and defects. Many here seem to believe that all features work on all chips (but the company chooses to disable them) when that is not true. Some features, such as virtualization, are excluded from SKU's for a business reason. These are indeed disabled by the manufacturer inside certain chips (they usually use chips where that feature is defective anyway, but can disable other chips if the market is large enough to sell more). Other features, such as less cache or lower speeds are missing from some SKU's because those chips have a defect which causes that feature to not work or not to run as fast in those chips. Rather than throwing those chips away, companies can sell them at a cheaper price. i.e. Celeron -> 1/2 the cache in the chip doesn't work right so it's disabled.

It works both ways though. Some of the low end chips must come from better chips that have been down-binned, otherwise there wouldn't be enough low-end chips to go around.Reply

The problem is that we need to choose between using the integrated GPU, which means an H67 board, or doing some overclocking with a P67. I wonder why we have to make this choice... it just means that if we don't game and the HD 3000 is fine, we have to go for the H67 and therefore can't OC the processor. Reply

The benchmarks against the AMD processors are useless. All they compare is core-to-core performance (4 cores to 4 cores). What you should be comparing is comparably priced processors/systems. For example, the 6-core AMD 1090T costs a hundred dollars less than the i7 2600 at newegg.com, yet your benchmarks fail to provide any such comparison. It's quite possible that for some applications the 6-core AMD may perform better than the more expensive 4-core i7 processors in your benchmarks.Reply

Anand says, "frequency domain (how often pixels of a certain color appear)," but this definition of the frequency domain is incorrect. The frequency domain, in the case of video, is a two-dimensional discrete cosine transform of the frame; it is not a count of pixels, like a histogram (binning).Reply
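For anyone curious what that correction means in practice, here is a minimal from-scratch sketch of the 2-D DCT-II (the transform used by block-based codecs), built with numpy rather than any codec's actual routines. A flat block of pixels puts all of its energy in the single DC coefficient, showing that the transform measures spatial frequency, not how often a color appears:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row k samples a cosine of
    # spatial frequency k across the n pixel positions.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)  # DC row gets the smaller scale factor
    return c

def dct2(block):
    # 2-D DCT: transform the rows, then the columns.
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# A flat (constant-brightness) 8x8 block: all energy lands in the
# DC coefficient [0, 0]; every AC "frequency" coefficient is ~zero.
flat = np.full((8, 8), 100.0)
coeffs = dct2(flat)
print(round(coeffs[0, 0]))  # 800

ac = coeffs.copy()
ac[0, 0] = 0.0
print(bool(np.abs(ac).max() < 1e-9))  # True: no AC energy in a flat block
```

A histogram of this block would say "the value 100 appears 64 times"; the DCT instead says "there is no variation across space", which is the sense of "frequency domain" relevant to video encoding.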


I have Intel HD 3000 (Sandy Bridge) graphics in my system and I was going to get the game "Oil Rush", but then I found a weird response about the game here: http://www.futurehardware.com/pc-gaming/288.htm , so I just wanted to know whether anyone has tested the Intel HD 3000 with Oil Rush. Any help with this would be highly appreciated.Reply

I've got Intel HD Graphics 3000, and according to this forum/review it has a problem running Dawn of War 2 even on low graphics... I have it set to max graphics and it runs like a dream. Same with a lot of the games I play on it... Reply

Guys, is it safe to overclock the Intel HD 3000 GPU? I own a 2500K CPU. I can overclock the GPU to 1450MHz and it looks stable, but I don't know how to read the temperature from the GPU, so I am afraid I could burn my GPU/CPU.Reply

Hi. First of all, sorry for my English. I have a question. I have seen the Dell laptops; they are identical, but one has the Intel Core i3-2350M 2.3GHz, the other the Intel Core i5-2450M 2.5GHz, and the third the Intel Core i7-2670M 2.4GHz.

The prices are $600, $670, and $800. I do some live multi-channel audio production and .NET programming. So which one should I go for? ThanksReply

I recently got this processor. It is the ultimate for gaming. However, in my Windows CPU meter gadget I can see only 2 cores functioning. Stock comes with an unlocked multiplier AFAIK, but here in my system it shows only 2 cores. Is there any way to activate all the cores for better performance?

I am new to this website, and I was wondering if you could help me out. I am looking to get a new laptop, and I plan to get into graphic design in about a year and a half. My current laptop is crapping out, so I was looking for a laptop that would be suitable for graphic design as well as everyday computing. I am looking at getting this: http://promos.asus.com/US/ZENBOOK/i/. I am still a student, so I really like the thin, sleek design. I see that this laptop has Intel HD Graphics 3000 and an SSD. I just want to make sure, before I make an investment, that this laptop would be sufficient for graphic design work.

I was wondering how Intel Quick Sync might impact PC-based security systems/CCTV like those from AverMedia or GeoVision. For the longest time Aver advocated a dedicated graphics card, but now says the HD 2000/3000 is OK.

I read about the limited software support in the article and guess that Aver does not yet take advantage of Quick Sync. However, I had to RMA an NV6480 just for compatibility with a Sandy Bridge CPU (even while using a dedicated GPU, an ATI 5000 series, for multiple monitors) and wondered why.

Anyone know why Sandy Bridge might cause compatibility issues with DVR/NVR Cards and what advantages Quick Sync could bring to the IP Security Camera market if top companies like Geovision or Avermedia developed software for it?Reply