Tech Preview 2012: The PC's Bright Future

Let's Talk Sockets

A to-scale comparison of LGA2011, LGA1155, and AM3+

LGA2011: Yes, Intel's new LGA2011 is really that huge, and it even has two arms to lock it down. It has to be big, given all the memory channels and PCIe lanes it must support.

LGA1155: Surprisingly, Intel will stick with the LGA1155 socket through at least next year's new Ivy Bridge CPU instead of pushing it overboard.

AM3+: AMD's new AM3+ socket really just adds beefed-up voltage guidelines for board makers, so many late-model AM3 boards may work with the FX CPU.

Ivy Bridge Cometh

Bringing a 22nm process, 3D transistors, and more

Intel has a reputation as the master of the process, and that likely won’t change early next year when the company introduces its new Ivy Bridge chip. The most significant aspect of Ivy Bridge is the move to a new 22nm process using 3D tri-gate transistors. By wrapping the gate around a raised fin of silicon rather than laying it flat, these transistors make literal use of the third dimension, and if Intel’s bet pays off, they could deliver significant power reductions and performance dividends. Ivy Bridge itself is only considered a “tick,” Intel-speak for a small step forward. That describes the x86 side of the chip, which won’t offer huge changes. On the GPU side, Intel says it will introduce a major step forward in graphics performance and add support for DirectX 11 and OpenCL. Ivy Bridge chips will also let OEMs dial thermal limits up or down (it’s not yet known whether end users will be able to change this).

Ivy Bridge will also bring new FMA3 instructions, a new digital random number generator (exposed via the RDRAND instruction) to enhance security, and improved power management. Also on tap is support for PCIe 3.0.

The really good news is that Ivy Bridge will be backward compatible with current LGA1155 motherboards. PCIe 3.0 won't be supported in all slots on all boards (although some vendors say they’re ready to go with PCIe 3.0) and will require a BIOS update to run. Paired with Ivy Bridge will be the new Panther Point, or 7-series, chipset that will finally bring native USB 3.0 to Intel CPUs.
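For context on why PCIe 3.0 support matters: per-lane bandwidth roughly doubles over PCIe 2.0, because the transfer rate rises from 5GT/s to 8GT/s while the encoding overhead drops from 8b/10b to the far leaner 128b/130b. A quick back-of-the-envelope calculation (Python used purely for the arithmetic):

```python
# Per-lane PCIe bandwidth in MB/s:
# transfer rate (GT/s) x encoding efficiency / 8 bits per byte
def lane_mb_s(gt_per_s, payload_bits, total_bits):
    return gt_per_s * 1e9 * (payload_bits / total_bits) / 8 / 1e6

pcie2 = lane_mb_s(5, 8, 10)     # PCIe 2.0: 5GT/s, 8b/10b encoding
pcie3 = lane_mb_s(8, 128, 130)  # PCIe 3.0: 8GT/s, 128b/130b encoding

print(round(pcie2), round(pcie3))  # 500 985 -> roughly double per lane
```

So a x16 slot goes from about 8GB/s to nearly 16GB/s of one-way bandwidth, which is why vendors are eager to claim PCIe 3.0 readiness.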

Benchmarkapalooza

For our testing, we built four machines to test the various chips and tried to balance them as closely as possible. Each test station had a stock-clocked GeForce GTX 580, the same public Nvidia drivers, 64-bit Windows 7 Professional, and matching WD Raptor drives. Why not SSDs? We’ve seen occasional SSD performance variance among chipsets, leading us to believe that an HDD is the more reliable option for comparison. (We have, however, tested several of our benchmarks with current-gen SSDs to ensure they weren’t being bottlenecked by disk I/O.) One area where our configs diverged was in RAM, due to the channel differences. The dual-channel and quad-channel systems featured 8GB of DDR3/1600, while the tri-channel system had 6GB of DDR3/1600. The difference is very unlikely to impact our benchmarks, as none cross the 6GB boundary.

For the Phenom II and FX-8150, we used an Asus Crosshair V Formula motherboard. With the FX-8150, we used a specific UEFI developed by Asus for testing, while a public UEFI was used with the Phenom II X6. The Core i7-2700K was tested with a Gigabyte GA-Z68X-UD3H-B3 board, the Core i7-990X with an Intel DX58SO2 Smackover 2 board, and the Core i7-3960X blew away everyone from the socket of a new Asus P9X79 Deluxe board.

And that’s really the upshot of all this. In the Intel three-way showdown, we figured the Core i7-3960X would give us only a slight boost over the Core i7-990X. Instead, the newer hexa-core demolished its older sibling in just about every test. The 3960X really shone in multithreaded tests, with encoding taking 20 percent less time. Using Sony Vegas 10, we saw the 3960X kick out a 27 percent faster encode. 3D rendering saw increases from 13 to 27 percent. And that pesky Core i7-2600K, which occasionally beats the pricier 990X? The 3960X put it in its place in most of our tests or ran dead even with it. In actual per-core performance, Sandy Bridge and Sandy Bridge-E were about the same, but the extra cache, additional memory bandwidth, and four more threads make the 3960X not just a winner, but a decisive champ today. The Core i7-3960X is simply the fastest CPU we’ve ever tested and puts red meat back on the menu for PC enthusiasts.
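A note on the percentages above: “20 percent less time” and “27 percent faster” are two different (though related) metrics, and they’re easy to conflate. A quick sketch with hypothetical timings (not our actual benchmark numbers) shows the difference:

```python
# Hypothetical encode times in seconds, purely to illustrate the two
# metrics -- these are NOT our measured benchmark results.

def pct_less_time(old_s, new_s):
    """How much less wall-clock time the new chip takes: (old - new) / old."""
    return (old_s - new_s) / old_s * 100

def pct_faster(old_s, new_s):
    """How much faster the new chip is (throughput): old / new - 1."""
    return (old_s / new_s - 1) * 100

old, new = 100.0, 80.0  # new chip finishes the job in 80% of the time
print(pct_less_time(old, new))  # 20.0 -> "20 percent less time"
print(pct_faster(old, new))     # 25.0 -> "25 percent faster"
```

Same pair of timings, two different-sounding numbers, which is worth keeping in mind when comparing benchmark claims.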

That doesn’t negate the value of the 2600K and new 2700K, though. Systems using those chips are far cheaper to build, offer more than enough performance, and have a solid upgrade path. So how do you pick? Generally, we’re sticking to our recommendation that if you do 3D rendering, video editing, or other workstation tasks for a living, the 3960X is a must-have CPU. If you also find yourself encoding a lot of media, those extra cores are well worth it. However, if you primarily game and don’t get paid by the hour to render video or perform other processing-intensive tasks that need the cores, the 2600K/2700K is still a killer value—for now. That’s likely to change when Intel releases its quad-core version of the Sandy Bridge-E. If the part is price competitive, it might simply make more sense to build on that chip, which has a better upgrade path for an enthusiast.

And what about AMD? To be fair, we don’t think the FX-8150 should be compared to the new 3960X or the 990X, as those chips cost four times as much. But what about the 2600K? Even there, the FX-8150 has a tough time and can get beaten pretty badly by Intel’s second-fiddle Sandy Bridge. AMD actually thinks the eight-core FX-8150 is a better match for Intel’s Core i5-2500/2500K parts (a 2600K with less cache and no Hyper-Threading). How meaningful that is really depends on how you view the glass. In one way, it’s great that AMD finally has a part that is at least competitive with some of Intel’s higher-tier Sandy Bridge CPUs. But seen differently, how good is it that after all this time and a major redesign, the best AMD can do with an octo-core CPU is compete with a cheaper Intel quad-core chip? We know that for people who only pay attention to core counts (the way they once did megahertz), the sound of eight cores is really appealing. But with GPU and CPU cores starting to blur, does it really matter how many “cores” you have? Just as we once had to keep in mind that a 2.13GHz Athlon XP could kick the crap out of a Pentium 4 clocked 1GHz faster, perhaps we have to stop looking at CPUs in terms of cores and instead look at, well, the model number.
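The Athlon XP vs. Pentium 4 point boils down to simple arithmetic: rough throughput is instructions per clock (IPC) times clock speed, so a chip with a big IPC advantage can beat one clocked a full gigahertz higher. A toy model with illustrative IPC values (assumptions for the sake of the example, not measured figures):

```python
# Toy model: relative throughput = IPC x clock (GHz).
# The IPC numbers below are illustrative assumptions, not measurements.

def throughput(ipc, ghz):
    return ipc * ghz

athlon_xp = throughput(1.5, 2.13)  # higher IPC, lower clock
pentium4  = throughput(0.9, 3.13)  # lower IPC, ~1GHz higher clock

print(athlon_xp > pentium4)  # True: the slower-clocked chip wins
```

The same logic applies to core counts: eight weak cores can lose to four strong ones, which is exactly the FX-8150 vs. 2500K matchup.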

It’s not all downer news for AMD, though. We saw several signs of great performance with the new chip. Up against the Phenom II X6, the FX-8150 offers a serious boost in several encoding tests. In fact, in many encoding tests where the Phenom II X6 is road kill, the FX-8150 offers, umm, Sandy Bridge-like performance. In our MainConcept test, where we only do one-pass rendering, the FX-8150 mangles the vaunted 2600K. In other tests, such as POV-Ray, Bibble, and HandBrake, the FX-8150 pulls pretty damn close, too.

As sad as some AMD fans will be that Bulldozer doesn’t flatten Sandy Bridge, it’s probably as close as AMD has been in some years.

Comments

Software- and peripheral-wise, in 2012 we are getting to the point where the "ecosystem" is delivering substantial new functionality: Parrot-style Skype following you while you walk around the house by the TV, kitchen, etc. Think of voice differentiation, allowing data to be shown on a screen at a work meeting depending on speaker commands, whoever he/she is, or Excel filling in data by voice and touch combinations... Yes, Excel and touch seem like a perfect match, the same way Word and voice will also easily fit.

Technologies like handwriting and voice recognition, speech-to-text, facial and partial-body recognition, motion- and voice-activated features, and bio- and neuro-control are all already mature enough to have products on the shelves: http://www.gamesradar.com/mindwave-a...r-now-on-sale/ shows one of the more exotic examples. But to get a hint of the functionality now within reach, better to read up on some of the new functions of the recently released second Kinect SDK, whose apps are starting what seems to become a BIG wave:

* Sound source localization for beamforming, which enables the determination of a sound’s spatial location, enhancing reliability when integrated with the Microsoft speech recognition API.
* Depth data, which provides the distance of an object from the Kinect camera.
* Highly performant and robust skeletal tracking capabilities for determining the body positions of one or two persons moving within the Kinect field of view.

What we expect in Windows 8 is an OS that actually facilitates the interconnection and merging of all these techs, allowing for new functions and vast efficiency improvements in one environment applicable to multiple devices, at the center of which will be the PC, and around it phones, house appliances, TVs, sound systems, electrical everything, etc. (connected via little wireless IP thingies).

To me this is not just creating an ecosystem; it's allowing for the evolution of an era in control and IT functionality. This is not just about speed and compatibility; it is about radically new functions and ways of doing the old and the new.

One good example of how this evolution becomes possible is the touchable hologram. It has been in our fantasies since the '70s, and several attempts have been made to "densify" holograms by adding sound waves, air-pressure shaping, and who knows what else. Now, however (http://www.mobiledia.com/news/113457.html), technicians have left holograms in their physical form while adding Kinect tracking and CPU power, allowing precise in-air control for users who want to interact with whatever a hologram conveys... This is the kind of new capability that comes from the interaction of several techniques, instead of trying to force a single technology, device, or piece of software to do it all by itself.

Articles were written saying the minicomputer would never displace mainframes. It did, and lots of companies died. Then, decades later, articles were written about how the PC would never displace the mini. It did; more companies disappeared. Now articles are being written about how tablets and mobile devices won't displace the PC. What are the chances that this time they are right?

Ironically, Andy Grove was known for saying that only the paranoid survive, and for his take on historic inflection points, and here Intel sits on the wrong side of one, along with the other half of the once-dominant Wintel monopoly.

Professor Clay Christensen is well known for his theories explaining this phenomenon: entrenched companies ignore the disruptive technology in the beginning because it's not really seen as a serious threat and their customers aren't really considering it in any meaningful way. Then they wake up one day, the whole world has changed, and they are on the outside looking in. To have that happen to Intel of all companies just shows it can happen to anyone.

Articles were also written about how BluRay would fail, because it's Sony, and 'remember what happened with Betamax?' How many other things have come, been hailed as the next big thing, but didn't?

The thing that's different between mainframes, minicomputers, and PCs, is that PCs are based on open standards, and are a mass market item, not restricted to businesses. There will always be a market for a high performance device (PCs), as evidenced by people taking up PC gaming and leaving their 360s to red ring.

Not my cup of tea, though; I built a rig about 6 months ago w/ a 2500K (nothing fancy, but a lotta bang for the buck). I'm very happy with it; there are not many programs I can't run with top-notch graphics and settings.

So while these new chips will in all likelihood shred my current setup, I personally see no reason to get all antsy when I'm not running anything that requires that kind of power anyway...

I don't plan on swapping my i7 2600 out anytime soon. We still need better multithreaded and 64 bit apps (I'm looking at you, Skyrim). Might pick up one of those GPUs, if I absolutely want 60 fps on highest.

Again, killer article, guys. Nice to know what's coming down the pipe in the next year or two. What I'd really like to see is SSDs reach practical price points compared to traditional platter drives. They have gotten SOOOOO much cheaper since their inception, but the prices still need to come down. I'm hopeful I can get a 1TB drive within the next couple of years without spending a small fortune.

Hell, the 480gb drives cost nearly half of what I built my entire rig for. For desktop storage, that is ludicrous.

Well before the end of 2012 we'll see more mainstream laptops able to run at three-point-whatever GHz times six cores, for a combined rating of about 20. Now if I could only convince Apple to offer user-interchangeable and hot-swappable batteries in their devices...

To be honest, the Xeons will destroy ALL of these "desktop" chips. You should look into the as-yet-unreleased 8-core Xeons (i.e., this year's versions of the chips that were in the Dream Machine a few years back). As usual, they will smoke the desktop chips, pack 8 cores, and generally have two QPI links, so you can use two chips in one system, unlike the single-QPI "desktop" chips, which can't talk to other CPUs!