Posted
by
samzenpus
on Monday November 14, 2011 @08:52AM
from the the-new-kids dept.

MojoKid writes "Today marks the release of Intel's Sandy Bridge-E processor family and its companion X79 Express chipset. The first processor to arrive is the Core i7-3960X Extreme Edition, a six-core chip manufactured on Intel's 32nm process node that features roughly 2.27 billion transistors. The initial batch of Sandy Bridge-E CPUs will feature 6 active execution cores that can each process two threads simultaneously via Intel Hyper-Threading technology. Although the chip's die actually has eight cores on board, only six are active at this time due to power and yield constraints. These processors will support up to 15MB of shared L3 Intel Smart Cache and feature integrated quad-channel memory controllers with official support for DDR3 memory at speeds up to 1600MHz, as well as 40 integrated PCI Express 3.0-compatible lanes. Performance-wise, Sandy Bridge-E pretty much crushes anything on the desktop currently, including AMD's pseudo 8-core FX-8150 processor."
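For a sense of what the quad-channel DDR3-1600 claim buys you, here's the back-of-the-envelope peak-bandwidth arithmetic (a rough sketch only; sustained real-world throughput will be lower):

```c
#include <stdio.h>

int main(void) {
    /* DDR3-1600: 1600 MT/s over a 64-bit (8-byte) bus per channel */
    double per_channel = 1600e6 * 8;   /* bytes per second */
    int channels = 4;                  /* quad-channel integrated controller */
    printf("Theoretical peak: %.1f GB/s\n", per_channel * channels / 1e9);
    return 0;
}
```

That works out to 51.2 GB/s theoretical peak, versus 25.6 GB/s for the dual-channel mainstream Sandy Bridge parts.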

Because it actually provides context on how much better (or not) that $1000 processor is. Plus, how many other $1000 desktop processors are out there to benchmark against? Certainly nothing from AMD.

Look at the article above this one about the 16-core AMD processors. If people are going to compare $1000 processors against $300 ones, why not take a wander into server space? The goal posts have already been unfairly moved, so why not give them a bit more of a nudge into Xeon and Opteron space?

1) The 2500K/2600K CPUs that are the high end for the consumer boards. The question there is "What do I get moving up to the much more expensive E series?"

2) The top of the line AMD Bulldozer. The question there is "How much faster is Intel's high end than AMD's high end?"

3) The previous Intel high end, the i7-990X. The question there is "How much faster would it be if I upgraded?"

In all cases, you are talking about a very high-priced, over-specced part. There are really no other chips in its category. It is for people who demand maximum performance and aren't concerned with the stiff price premium to get it.

They are benchmarking the top-end desktop part of the current generation against the top-end desktop part of the previous generation (990X), the upper mainstream of the current generation (2700K), and the best chip the competitor can come up with (FX-8150). What else are they supposed to compare it with?

It is a bit disappointing that they don't have the 3930K in the test; it should be only slightly slower than the 3960X while being a lot cheaper. From what I can gather this is because Intel didn'

Honestly, with a $500+ entry tag, plus a cooler which is not included, plus an expensive, low-volume motherboard, you might want to compare it to a dual-processor Xeon machine rather than other desktops, for some alleged server/workstation stability too. Performance was as expected: 6 cores versus 4, so it's faster in well-threaded workstation applications and not that different otherwise.

What's disappointing is the platform: no USB 3.0, only two SATA 6Gbps ports, no SAS support. PCI Express 3.0 seems to have made it in, but no cards support it yet, so there's nothing besides the processor that really screams high end. Well, that and 8 memory slots, if you feel 4x4GB isn't enough; but there are alternatives, like the old high end it replaces with its 6 slots, or the 8GB sticks that have been showing up lately - pricey, but you can get 4x8GB for less than one of these CPUs. Don't get me wrong, it's the undisputed performance king, but it's like the same car with a souped-up engine and fuel system yet none of the features that say this is a $100k Ferrari.

This is being positioned as a hobbyist platform, same as LGA1366. The affordable E-series (i7-type) Xeons don't boot on consumer-class motherboards and don't have chipset support for SMP, though. These guys are the only game in town for people who want to stick three video cards in something and get a top-notch CPU to go with it.

no hobbyist will spend $1000 for a cpu+cooler for this kind of performance. pointless. that's leaving aside the lack of a lot of major stuff like usb 3 et al.

and not 3, but even 4 video cards in crossfire or sli will not require this kind of computing power. even if you shove in 2 x 6990s in crossfire, which makes 4 top-rate gpus in 2 cards. apparently you don't know this enthusiast field, so don't bullshit about it.

no hobbyist will spend $1000 for a cpu+cooler for this kind of performance. pointless. that's leaving aside the lack of a lot of major stuff like usb 3 et al.

No, but someone who calls themselves a hobbyist but is actually a moron will. These parts are Intel going "look, we made the E5 Xeons; by the way, if you're a moron and want to hand us cash, please buy the desktop variants".

head to overclock.net and see if hobbyists will spend anything on this. you won't find anyone who breaks overclocking records or does custom water cooling spending $1000 on this. only fanboys with brand loyalty. that is normal.

your brother is better off with a dual-socket solution and amd opterons if he is doing anything that serious. it could come even cheaper than this intel setup and provide multiples of the performance. if he isn't doing that already, then he doesn't know shit, and your argument is null.

So that's AMD's marketing approach these days: "Anyone who doesn't buy our systems is worthless, a fanboy, and we'll deride them and hope they buy our systems when we've insulted them enough"? And as for overclockers, do you really think they are the only hobbyists? Seriously? And even then, many overclockers who know what they are doing will buy these and make use of them.

For his work, the 3930K will beat dual Opterons, because the computational tasks are not that easily parallelized, so you need strong single-threaded performance.

These are Intel's "enthusiast" parts which generally means "people with too much money". Some people want the highest end performance, price is no issue. Intel is happy to stick a hose in their pockets and siphon out the cash.

That's also why these came after the regular SB parts. Intel knows full well that for 99.99% of people a standard SB is more than plenty, and they'd like to have something economical.

In terms of the other things you'd like to see, USB3 and more SATA 3 will probably be coming but Int

Well, you can get 8-12 cores with one of those, BUT they are previous-generation cores (the dual-socket variant of LGA2011 isn't out yet) and you have to pay through the nose to get a decent clock speed. The only people I know of who have purchased dual-Xeon workstations have done so for the RAM support.

What's disappointing is the platform: no USB 3.0, only two SATA 6Gbps ports, no SAS support. PCI Express 3.0 seems to have made it in, but no cards support it yet

But you have far more lanes. AFAICT LGA2011 has 40 lanes from the processor, so even if PCIe 3 doesn't pan out you can have two graphics cards running at 2.0 x16 and still have room for a nice LSI SAS controller.
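To make the lane math concrete, here's a quick sketch of one hypothetical slot layout against that 40-lane budget (the x16/x16/x8 split and PCIe 2.0's roughly 500 MB/s per lane are the assumptions here, not anything from the review):

```c
#include <stdio.h>

int main(void) {
    int lanes_total = 40;              /* CPU lanes on LGA2011, as stated above */
    int gpu1 = 16, gpu2 = 16, sas = 8; /* hypothetical: two x16 GPUs plus an x8 SAS HBA */
    printf("Lanes used: %d of %d\n", gpu1 + gpu2 + sas, lanes_total);
    /* PCIe 2.0 moves roughly 500 MB/s per lane each way after encoding overhead */
    printf("Each x16 slot: ~%d MB/s per direction\n", 16 * 500);
    return 0;
}
```

So two full-bandwidth GPUs and a storage controller fit exactly, which is the point: the mainstream LGA1155 parts can't do that without switches.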

Tri-gate transistor technology (up to 50% less power consumption)
PCI Express 3.0 support
Max CPU multiplier of 63 (57 for Sandy Bridge)
RAM support up to 2800MT/s in 200MHz increments
Next Generation Intel HD Graphics with DirectX 11, OpenGL 3.1, and OpenCL 1.1 support
The built-in GPU is believed to have up to 16 execution units (EUs), compared to Sandy Bridge's maximum of 12.
The new random number generator and the RdRand instruction, codenamed Bull Mountain (a usage sketch follows this list).
Next Generation Intel Quick Sync Video
DDR3 low voltage for mobile processors
Multiple 4k video playback
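Since RdRand is on that list, here's a minimal usage sketch with the GCC intrinsic (it assumes an RdRand-capable CPU and building with -mrdrnd; real code would loop on failure):

```c
#include <stdio.h>
#include <immintrin.h>

/* Build with: gcc -mrdrnd rdrand_demo.c */
int main(void) {
    unsigned int r;
    /* _rdrand32_step returns 1 on success, 0 if no random value was ready */
    if (_rdrand32_step(&r))
        printf("Hardware random value: 0x%08x\n", r);
    else
        printf("RdRand not ready; callers are expected to retry\n");
    return 0;
}
```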

So yeah, just hang on for the die shrink if you care about performance and power consumption. My next system will definitely be Ivy Bridge based.

I'm then going to buy an i5 2500K chip and Z68 motherboard for pennies, load it up with yesterday's memory, and have a system which will last me another 5 years. It's worked well with my existing nForce 680i / Core2Quad Q6600 setup.

The Z68 isn't likely to go down in price any; it is the chipset for the IB too. The IBs are drop-in replacements for SB processors: same board, same chipset, and all that. In terms of RAM, same deal: they'll still use DDR3. Now, that doesn't mean you'll pay much for it; DDR3 is dirt cheap, like $100 or less for 16GB of high-quality RAM, but it won't get any cheaper on account of new RAM coming out (RAM is also cheapest when it is in the most production, not because of new tech).

"I bet you'll have a hard time finding anyone running integrated graphics in a home built machine, certainly not one running a top-tier CPU in it."

As a kernel developer I have scripts running all day that compile new kernels, boot VMs into them, and run some tests. The faster the better. I wouldn't say no to lower utility bills either, even if that's not the first thing I look at.

But in any case, for graphics, the integrated stuff is fine. And Intel's involvement in Linux graphics development has meant their

Integrated graphics is all well and good for the mobile market, but I bet you'll have a hard time finding anyone running integrated graphics in a home built machine

I have a number of machines running integrated graphics for numerous reasons. Not least of which are the kids' machines - you sure as hell don't need a discrete graphics card to play stupid little flash games and do homework.

Machines that are for checking email and surfing the web... don't really need discrete graphics either.

I think the biggest reason is possibly that the integrated graphics can function as your primary adapter when the power of the dedicated cards is unnecessary, reducing power consumption and heat overall. Certainly it's not going to win any graphics wars, but it's more than enough to run Aero efficiently. I suppose power consumption is not at the top of the list of most people's concerns when buying something like this, but it's nice to have the option.

16 EUs. Woah. For comparison: nVidia and AMD's current cards have over 1500 of them.

An EU ~= 2 shaders, so it's 32 vs. 1500, but yes...

Why would anyone with such an extreme setup ever care about such shitty integrated graphics?

Ivy Bridge isn't what Intel considers "extreme"; it's their mainstream processor. As such it'll go into plenty of corporate desktops and other places that want CPU power but no gaming, or only casual gaming. Fewer and fewer people get a discrete graphics card, because the Intel parts don't suck quite as badly as they used to; their market share is now about 60%. But yes, for this discussion those aren't very good selling points. OTOH, if you game, the Sandy Bridge-E doesn't

Actually, nVidia's current cards have 512 of them per chip, and AMD's currently have around 400. AMD like to claim they have 1600 by counting the number of scalar units instead of the number of vector units.

Intel's current (Sandy Bridge) IGP is pretty damn good - it holds its own against integrated offerings by AMD and nVidia. Ivy Bridge is expected to put them right in the mix.

This is all nice and well, but are there any sites that actually benchmark this CPU under Linux, running some stuff not compiled with the Intel compiler? AFAIK most of the benchmark software running on Windows is compiled with ICC, and ICC cheats: it disables most optimizations on non-Intel CPUs.
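For the curious, that dispatch trick hinges on the CPUID vendor string. This isn't ICC's actual code, just a minimal sketch of the kind of check involved, using GCC's cpuid.h helper:

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

/* CPUID leaf 0 returns the vendor string in EBX:EDX:ECX
 * ("GenuineIntel" on Intel parts, "AuthenticAMD" on AMD). */
int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    printf("CPU vendor: %s\n", vendor);
    return 0;
}
```

A dispatcher keyed on that string rather than on feature flags will take the slow path on any non-Intel part, regardless of which SSE levels the chip actually supports.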

How about some Linux developer workloads? Compile times? IDE performance? Java performance? PHP, Apache, PostgreSQL, MySQL performance? KDE/GNOME performance? CAD/CAM? MATLAB or Octave? bzip2/gzip/SSL/zip under Linux? I know some of these workloads depend on IO/graphics more than on CPU, but I'd like to see results anyway. And I'm sick and tired of reviews that run some Intel-compiled synthetic benchmarks and then some games that primarily use the GPU anyway. Phoronix is guilty of that as well - they should have more WORK workloads and fewer FPS counts for games. But at least they are trying - and Bulldozer performance under Linux/GCC isn't as bad compared to Intel CPUs as it is under Windows/ICC.

As far as I remember, some benchmark software (I think the PassMark family) is compiled with ICC. If not the benchmarks themselves, then a lot of the Windows system libraries used by benchmarks are compiled with ICC. I haven't verified this myself, as I'm not that interested in the synthetic benchmarks most review sites use. I should get my hands on some of these benchmarking tools and verify which compiler was used to build them - compilers usually leave some identifying traces in the binaries they produce.

That's the number of frames per second that the MULTITHREADED VIDEO ENCODER was able to encode. All of the provided benchmarks on that page were about video encoding using multithreaded encoders, and the new proc was beating all the others easily. That's why I asked if it was perhaps the wrong link.

No kids? No wife? No video camera perhaps? Or just not getting out of your basement? Encoding 1080p video takes a while, and I should soon start encoding dual-stream (3D) 1080p. That kind of power would be welcome for me (sporting an i7-960 on the computer running the video tools). Anyway, that's the original poster's link target.

oh yeah. instead of encoding in the time it takes me to go get a cup of tea and come back, now my encoding will be complete by the time i reach the door to the wc while passing through the corridor from the kitchen to the living room. yes. that totally justifies shelling out $900 to encode my home videos.

Some people are not bothered by that $900. Some people will buy outrageously priced video cards for limited benefits. You don't need that 80" LCD TV either, but it's nice to have. Shaving 10-15 minutes off my encoding when I want something is nice to have too. Besides, there's much more to be done on a computer that will benefit from this processor. Think about compiling & linking, where the time saved directly correlates to money in the pocket. Not that I would buy this processor, but there's def

Some people are not bothered by that $900. Some people will buy outrageously priced video cards for limited benefits.

i am one of those 'some people', and i participate in communities populated by those people, and no one will buy something that provides only 15-20% more performance but burns power like a small oven and is priced at a fucking $900.

you sir are incorrect. i haven't found the specs online, but only Blu-ray devices support 1080p, and i was pretty sure 3-d didn't need dual progressive streams but rather two interlaced streams that form a 1080p image. but the wiki on it claimed it uses 50% more overhead, suggesting dual 1080p streams. but as the 3DS shows us, 3-d doesn't have to mean high definition.

besides, 3d is a fad; with 18% of the population unable to watch 3d (seizures) and many more complaining of headaches, it's not likely to be widespre

Side By Side (SBS) encoding is often done with two full-quality streams. You can pipe it to two projectors (with polarizing filters), and I know that some TVs can accept the stream too (mine do, at least). I'm not sure what you meant about only Blu-ray devices supporting 1080p, but I've been running a triplehead setup of 2560x1600 screens since 2007, which makes it 6 times the resolution of a 1080p stream. Screen caps are huge - so much so that I'm not recording them much anymore.

Click the link and read, please, before commenting. The FPS is multithreaded performance. It's referring to how many frames per second it can encode video, and PIBM is right. There is a huge difference between the two processors in those tests; eyeballing it, it looks like approximately a 42-50% improvement in each of them.
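To put a figure like that in wall-clock terms, here's a quick illustrative calculation (the clip length and the 20 fps baseline encode rate are made-up numbers, not from the review):

```c
#include <stdio.h>

int main(void) {
    double frames = 24.0 * 3600 * 2;   /* two hours of 24 fps video */
    double old_fps = 20.0;             /* assumed baseline encode rate */
    double new_fps = old_fps * 1.45;   /* the ~45% improvement eyeballed above */
    printf("Before: %.0f min  After: %.0f min\n",
           frames / old_fps / 60.0, frames / new_fps / 60.0);
    return 0;
}
```

Under those assumptions the encode drops from about 144 minutes to about 99, which is the sort of saving the posters above are arguing over.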

What I see is the 2600k, the 4 core $300 chip, matching or beating the Bulldozer, and the 3960X beating everything by a decent margin.

You are correct that the Bulldozer doesn't have much to worry about from the new E series, as they are much higher priced and compete in a different market. What it does have to worry about is the regular SB chips, which are killing it. Even when things are stacked in what should be its favour, heavily threaded tasks, the SB does as well or better. Then if you take the many other tasks that are not as multithreaded, the SB pulls way ahead.

THAT is the BD's big problem... Well that and the fact that the Ivy Bridge comes out in a few months. The E series is just for people with too much money. In the consumer market, the regular SB is an amazing performer.

Cool cherry picking, bro. Too bad the 2600K spanks it silly in the vast majority of other tests. Keep clinging to one or two benches as proof of performance while the rest of us laugh at you. And you can't hit 5GHz on air with BD in typical cases. It'll eat enough power to run a small town. A 2600K can do that well enough without blowing your power bill through the roof.

It's always funny seeing AMD fanboys desperately clinging to Faildozer. Most of them had sense enough to realize what a joke it was.

So close = beats BD silly? Because I'm seeing it whomping BD in those benches, beating it by over 50% in some cases! Did you even read your own link? And you can astroturf all you want; it doesn't change the fact that BD's single-threaded performance sucks dead goat ass.

Apple uses workstation-class components for their Mac Pros. In terms of Xeon CPUs, I'm not sure how Intel is going to handle it. They've had SB Xeons for a while now; the E7-8830 is an example. They have some features you see in the new SB-E chips, some that you see in the normal SB chips. They are also a different socket from either. I don't know if they plan a separate Xeon line or not.

At any rate, Apple is likely to stick with Xeons, that is just how they do things for better or worse. When will they do i

The Core i7-3960X Extreme Edition finished well ahead of the second-place Core i7-990X

I don't see any benchmark placing the 3960X more than 10% faster than the 990X. How can 10% and under be "well ahead"? The FPS tests are all under 2% in favor of the 3960X. $1,000 plus a motherboard upgrade for 2%? With Ivy Bridge you will get a die shrink. That means at least a drop in power consumption.

1) SATA 6Gbps and USB3 are typically on the mobo, not using up PCIe anyway.
2) The thing is, the number of machines not owned by enthusiasts that actually use these other things is tiny. Are you seriously going to spend thousands on a PCIe SSD but not be willing to shop out for a decent CPU too? I have no idea who actually uses a TV tuner any more with on-demand services being infinitely more convenient, and Thunderbolt is a) something that should be integrated into the graphics card anyway, b) something which on

Trying to future-proof an Intel-based motherboard is pointless. Considering this new socket replaces the LGA 1155, which is less than a year old, in an ever-decreasing release interval, I would estimate this socket will be obsolete by next spring, summer at the latest.

It doesn't replace LGA1155 it replaces LGA1366, which is 3 years old. These chips are server and workstation level chips.

Intel's next desktop architecture is called Ivy Bridge, will be released in the first 3 months of next year, and will use LGA1155. Ivy Bridge-E will use LGA2011. Only in about 20 months' time (by which point LGA1155 will be 3 years old) will Haswell come out on a newer socket.

LGA2011 really replaces LGA1567 for the Xeon MPs. Intel's on-again, off-again planned replacement for LGA1366 is actually LGA1356. However, it seems like Intel may somewhat simplify their socket lineup and just have LGA1155 for general uniprocessor platforms and LGA2011 for performance UP and all multiprocessor applications.

As someone with a decent investment in LGA1366 stuff, I'd rather play it smart and keep everything on the "mainstream" LGA1155 for anything but the six core CPUs. The motherboards are harder to find and substantially more expensive for the dubious value of having some extra PCIe lanes and a couple extra DIMM slots.

I'm in the process now of selling off my LGA1366 machines while they still have value and replacing them with Xeon E-series equipment.

When buying hardware, trying to future proof is dumb. You could try to "future proof" now and buy a $1500 system. In 3 years it'll be shit though.

Alternatively, you could buy a $600 mediocre system now, and another $600 system in 2 years that'll be faster than the above $1500 one. The result will be that you've spent $300 less, you've got machines that are reasonably current for 4 years, and the system you get out at the end is faster.

The result will be that you've spent $300 less, you've got machines that are reasonably current for 4 years, and the system you get out at the end is faster.

You're also sending twice as much to the garbage pile.

I know this isn't a consideration for most, and it's all but encouraged through the new "disposable electronics" thing that's crept up over the last decade, but at some point we need to consider that some considerations extend beyond the financial, even when talking about buying consumer goods.

For instance, I know people that buy a new printer every time their starter ink runs out because it's still cheaper than buying replacement ink cartridges. Three times a year they're throwing a perfectly good printer into the trash. Yeah, it saves them money, but does that really make it right to throw it in a landfill? I have a hard time saying yes.

Maybe if we required manufacturers to subsidize the disposal of their goods when such goods are non-biodegradable it would help do something to eliminate the whole "designed for the dump" phenomenon?

A machine that was bleeding-edge two years ago is still quite powerful today for the majority of people out there. Also, not every server in the rack has to be equal. There are plenty of less-demanding but still important roles that two year old machine can fill when it is kicked down a notch. I'm sure the "weakest link" hardware can be put to good use elsewhere when upgrade time rolls around. I know I consider the mobo-CPU combo as a unit now, rather than thinking "I can upgrade the CPU later". Maybe I can and maybe I can't, but it doesn't matter that much. So long as I have room to boost RAM and storage, I can extend the useful life of the hardware a great deal. It just may not be my fastest, l33t3st system any more. At worst, I can give the machine away -- my 3 year old secondhand hardware is generally as good as most people would buy off the shelf new, and I already have a good idea what it does best.

Only if you actually throw the machines away. If anything, it's the reverse. If you upgrade components of a machine, it's much harder to find a use for the bits you remove than if you replace the whole machine. A two year old machine may be underpowered for you, but there are lots of people who can use it for another few years.

What kind of moron would throw a still semi-decent computer in the trash? Sell it for $250 to recoup some of the loss. And who says it would end up in a landfill? I don't know about you, but my state has a recycling program. I can drop it off at Goodwill and it gets recycled.

In this instance, no: this is a new socket, replacing LGA1366. Next year, though, Intel will release new desktop CPUs based on the current LGA1155.

Intel actually doesn't release things on new sockets as much as people think. Every tick/tock pairing has one desktop socket and one server socket; this is the server socket to go with LGA1155's desktop socket. The tick to come (Ivy Bridge) will also use LGA1155 for desktops and LGA2011 for servers.

This is much the same as has happened before: Nehalem introduced LGA1156 and LGA1366, and Westmere reused them; Conroe introduced (properly) LGA775 and LGA771, and Arrandale reused them.

Which means the motherboard is good for exactly one generation of upgrades, to the die shrink of the same design. Who'd seriously upgrade their almost-new Nehalem system to Westmere, Conroe to Arrandale, or Sandy Bridge to Ivy Bridge? You'd do better putting all that money into one processor. Socket compatibility is only good if it lasts long enough that there's good reason to upgrade. With Intel, I assume that any new processor I buy will require a new motherboard, simple as that.

It ain't (necessarily) an Intel conspiracy to force you to fork out more cash for a new socket when an old one might have worked, since they'd simply be leaving the field open to AMD if that were the only reason (AMD's greatest selling point is the ability to leverage previous generations). The reasons are technical: when a die undergoes a shrink, there is also less area for the same number of signals, making it pad-limited. Also, die shrinks need more power and ground pins even as the real estate

You got those backwards. 775 and 771 were the old sockets, spanning from late P4 all the way up to Core 2. Socket 1156 was for the first-gen i3/i5, and Socket 1366 was Nehalem (i7-9xx and Extreme). They have never reduced the pin count, because each successive platform has introduced wider memory controllers and/or more QPI/DMI/PCIe lanes. Sandy Bridge-E is no different, moving up to 4-channel DDR3 from X58's 3 channels. That right there is an extra couple hundred pins just for the memory.