I would like to see a 32nm quad-core Westmere for Socket 1156 that retains the on-die memory controller and the more capable PCIe controller. I use a discrete graphics card and don't mind the P55 platform. These quad cores would be mainstream products that consume less power at the same clock, run at a higher clock for the same power, and/or overclock further.

A Socket 1156 Westmere quad core priced between $200 and $300 would be useful to a mainstream user like me, while a Socket 1366 hex core or a Socket 1156 dual core is not. Reply

It takes several transistors to make a logic gate, and several logic gates to make a pipeline stage. Each pipeline stage must stay in sync with the core clock, which is buffered and propagated across the core. If the longest path in a pipeline stage passes through 15 transistors, then those transistors must switch at 45GHz or faster to hit a 3GHz clock target (15 x 3 = 45). Some stages, such as L2 cache access or complex division, take several cycles to complete but are still in sync. Reply
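
The arithmetic in the comment above can be checked with a quick sketch (the 15-transistor longest path and 3GHz target are the commenter's illustrative numbers, not real design parameters):

```python
# Back-of-the-envelope check of the comment's timing argument: every device
# in the longest path of a stage must switch within its share of one clock
# period, so the required switching rate is path_length * clock_rate.

def required_switching_ghz(gates_in_longest_path: int, clock_ghz: float) -> float:
    """Minimum switching speed (GHz) for each device in the critical path."""
    return gates_in_longest_path * clock_ghz

print(required_switching_ghz(15, 3.0))  # -> 45.0, the figure in the comment
```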

Wasn't it said somewhere that the 6-core Extreme would run at 2.66GHz? Unless they really pulled off something spectacular with Westmere, it's hard to see six cores over 3GHz fitting in a 130W TDP. Reply

I think we'll have to wait for 32nm quad-core desktop CPUs (the Lynnfield successor) until Sandy Bridge in Q1/2011. Anything else doesn't make much sense, because Intel has just released, or is about to release, updates in every other segment (32nm mobile CPUs just launched; 32nm high-end desktop and server parts are coming in March). What else could be the first Sandy Bridge chip? Reply

Even factoring in PSU conversion losses, the high idle power draw of your test systems is worrying. Is that due to component choices or turning off power saving features? If the former, it seems rather pointless to be comparing CPU power efficiency when the rest of the system is drawing inordinate amounts of power, and if the latter, why? Reply

They aren't measuring the same thing, though. The SPCR review measures the load on the ATX12V connector, the 4-pin connector used just to provide (or supplement) CPU power. The AnandTech measurements are taken at the wall and include the PSU, motherboard, and other system devices. Reply

1. In which case it's silly to discuss (and compare) power consumption of a supposedly power-efficient CPU alongside peripherals that draw 6-8 times as much power as the CPU, no?

2. SPCR also includes everything in its system power, except for PSU losses. And I refuse to believe that a modern CPU could be only 10% efficient at 100W.

It still doesn't explain why AT's test systems draw so much power at idle. Most modern GPUs are pretty efficient at idle these days; if it's an issue with the GPU or another component choice, then the testbed needs updating, otherwise it is useless as a metric for CPU power efficiency. Likewise with software, drivers, and Windows settings. Reply

And no, it's certainly not "silly" to discuss power consumption of an entire system, since it's a real-world test: the i5-750 and i7-860 required a discrete card, while SPCR reviewed chips with on-chip graphics and did not need one. The i5/i7 comparison uses systems as close to identical as possible; the remaining differences are unavoidable, i.e. you can't run an i7-920 with 2x2 GB DDR3, and the various chipsets are ALWAYS going to be running along with the processor. Reply

Actually, all the SPCR tests bar one were done with a 9400GT and a Raptor HDD (actual model unspecified).

Looking at GTX285 idle power consumption across various websites, it actually seems to idle quite well, at around 30-40W - the 9400 GT idled around 10W (est) at SPCR.

That still doesn't explain the discrepancy between AT's idle figures and SPCR's. Even factoring in the GTX280/285 and PSU losses, there's about a 40-50W discrepancy in the X3 720 results, which amounts to a 90-100% difference from SPCR's figures. I doubt minor component changes (RAM, HDDs) would make up that much difference, especially at idle, when their power draw is lowest.

It may be a matter of software/bios/acpi configuration settings, or it could just be that the testing methodology of the two sites is not directly comparable. But I am curious which, and why. Reply

I am wondering: should I wait for Advanced Vector Extensions (AVX)? Is it going to double the speed of video encoding? If so, it will make a huge difference. I am waiting for the Intel Developer Forum; there will probably be more light shed on the subject. Reply
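
For what it's worth, AVX widens the SIMD floating-point registers from 128 to 256 bits, so the theoretical per-instruction throughput doubles; whether an encoder gets anywhere near 2x depends on how much of its work is vectorisable. A rough sketch (the 60% vectorisable fraction below is a made-up illustration, not a measurement of any real encoder):

```python
# Peak elements per vector instruction, and an Amdahl's-law estimate of the
# overall speedup when only part of the workload benefits from wider vectors.

def elems_per_op(register_bits: int, element_bits: int) -> int:
    """How many packed elements fit in one vector register."""
    return register_bits // element_bits

sse_floats = elems_per_op(128, 32)   # 4 single-precision floats per op (SSE)
avx_floats = elems_per_op(256, 32)   # 8 single-precision floats per op (AVX)

def amdahl_speedup(vector_fraction: float, vector_gain: float) -> float:
    """Whole-program speedup if only vector_fraction of the work speeds up."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_gain)

print(avx_floats / sse_floats)             # -> 2.0 theoretical peak
print(round(amdahl_speedup(0.6, 2.0), 2))  # -> 1.43 if 60% of the work vectorises
```

So "double the speed" is the ceiling, not the expectation, for real encodes.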

AMD is about a year behind Intel with respect to process technology, and Intel just introduced 32nm processors this January.

Judging from that, it is very likely that AMD will introduce 32nm processors in Q1/2011, and since quad core processors seem to be their bread and butter now (look at the $99 "620", for instance), it is not unreasonable to speculate that AMD could beat Intel to a 32nm quad core - but if so, only by a few weeks, of course. Reply

Aside from the server market, there are some folks that use Photoshop quite heavily; and with PS, it can use as many cores, as many threads, and as much memory as you can throw at it, and some features can still take a very long time to run. I anxiously await the day that I can have 6 OC'd cores running instead of my current 4. Reply

... which is of course bullshit. All that matters is how threaded the application you run is. If you run a single-threaded application on OS X and a single-threaded application on OS Y, they will both run on only one thread.

Nor is there any point forcing multithreading at all costs. Yes, you can make your average calculator use 12 threads, but why? Totally useless when one thread takes something like 0.0000000001s of CPU time. The more threads, the more CPU cost to manage them: synchronization, shared accesses and all. Going threaded is a good thing when there is a gain in doing so; going threaded at all costs is just ego-enlargement. Reply
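
The overhead point is easy to demonstrate: spawning a thread per trivial task costs far more than the task itself. A minimal sketch (the loop count of 1000 is arbitrary):

```python
# Compare running a trivial task directly vs. in a freshly created thread.
# Thread creation, scheduling, and join overhead dwarf the tiny workload.
import threading
import time

def tiny_task():
    return 2 + 2  # stand-in for work that takes almost no CPU time

# Direct calls: no thread-management cost at all.
start = time.perf_counter()
for _ in range(1000):
    tiny_task()
direct = time.perf_counter() - start

# One thread per call: pay creation + join overhead 1000 times.
start = time.perf_counter()
for _ in range(1000):
    t = threading.Thread(target=tiny_task)
    t.start()
    t.join()
threaded = time.perf_counter() - start

print(f"direct:   {direct * 1e6:10.1f} us")
print(f"threaded: {threaded * 1e6:10.1f} us")  # typically orders of magnitude slower
```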

Intel still has to obey the laws of physics. If core 5 wants L3 cache information that's stored in the cache right below core 0, how much you want to bet it will get it later than if the information is stored right below core 5? Reply

First, from those pictures it looks like 1366 won't be getting an on-board PCIe controller. I'm guessing that having the X58 with its own PCIe controller sitting between the CPU and the slots is the reason, but I was hoping to see this make it to the new chips.

On another note, I'm really hoping for a 32nm quad core on 1156, and soon. I'm eyeballing an i7 mini-ITX build, and a die shrink would help keep thermals down. Reply

(Sorry Tetrong, I accidentally reported you, MODS please do not ban or delete the post!!)

Tetrong, let's see what really happens at the simplest level, using the simplest math. At a 10GHz frequency, light in vacuum travels 3 centimetres, or 30mm, per clock cycle. Now if we assume the core is a perfect square whose diagonal a signal must cross within one cycle, we can use the Pythagorean theorem to calculate the maximum die size.

That gives a maximum die size of about 21mm on each side, or 441mm2, at a 10GHz frequency, if EVERYTHING is perfect. That won't be true, even if the circuitry has no trouble reaching such a frequency. True, the core execution area would be nowhere near 400mm2, so 10GHz is possible. But we won't see it in the near future. Reply
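
The numbers above can be reproduced directly (assuming, as the comment does, a square die whose diagonal a signal must cross in one clock cycle):

```python
# Speed-of-light bound on die size for a 10GHz clock: a signal can travel at
# most c / f per cycle, and the worst-case on-die distance is the diagonal.
import math

C_MM_PER_S = 3.0e11   # speed of light in vacuum, in mm/s
freq_hz = 10e9        # hypothetical 10GHz clock

distance_per_cycle = C_MM_PER_S / freq_hz   # mm travelled in one cycle
side = distance_per_cycle / math.sqrt(2)    # square die: diagonal = side * sqrt(2)
area = side ** 2

print(round(distance_per_cycle))  # -> 30 (mm per cycle)
print(round(side, 1))             # -> 21.2 (mm per side; the comment rounds to 21)
print(round(area))                # -> 450 (mm^2; 21^2 gives the comment's 441)
```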

But electrons do NOT travel at the speed of light. Going by your theory, that would make the max die size just a fraction of 441 sq. mm.

Actually it may be even smaller, because I'm quite sure electrons don't take a linear path inside the processor. But the main reasons we still don't see a 10GHz chip are completely unrelated to this. Reply

Indeed! The electrons do not travel at the speed of light, and they never have to. It is the electromagnetic field that forces them to move, and the propagation speed of the field is the speed of light (well, almost, in the dielectric). So the calculation is valid. Reply

It's true that electrons don't travel at the speed of light; they drift at a few inches per hour through any conductor. Each electron is "bumped" from behind by an entering electron. Think of the inside of a transistor, wire, or other conductor as a huge traffic jam: electrons are lined up, and when hit from behind by a new electron, the whole line moves forward slowly until the one at the front takes the force and is pushed out.

You could time with a stopwatch how long an actual electron takes to make its way through a circuit: leave the system on all day, and the first electron that entered your PC would only find its way to ground by the end of the day. That's how slow electrons are.

What matters is how quickly the force from the entering electron reaches the lead electron, not the electron movement itself. Reply
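
The "inches per hour" figure checks out against the standard drift-velocity formula v = I / (n * A * q); the 1A current and 1mm2 cross-section below are illustrative assumptions, while the copper constants are standard textbook values:

```python
# Estimate electron drift speed in a copper wire: v = I / (n * A * q).
# Assumed: 1A of current through a 1mm^2 wire (illustrative, not from the thread).
E_CHARGE = 1.602e-19   # C, elementary charge
N_COPPER = 8.5e28      # free electrons per m^3 in copper (textbook value)

current_a = 1.0        # amps (assumption)
area_m2 = 1.0e-6       # 1 mm^2 cross-section (assumption)

drift_m_per_s = current_a / (N_COPPER * area_m2 * E_CHARGE)
inches_per_hour = drift_m_per_s * 3600 / 0.0254

print(round(drift_m_per_s * 1000, 4))  # -> 0.0734 (mm/s)
print(round(inches_per_hour, 1))       # -> 10.4, i.e. inches per hour, as claimed
```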

You guys have no idea what you are talking about! Stop with the uninformed technobabble already! There is not one modern processor that needs to get a signal from one end of the chip to the other in ONE clock cycle. Just stop. Reply

Quite true! I believe the reason clocks are stuck around 3GHz has more to do with thermal/power considerations than feasibility. And if CPUs are doing more "per clock", then clock rate isn't a particularly good metric of speed either. I can't help thinking that at some point they'll run out of instruction-level parallelism and have to start upping the clocks on the execution units again, though. Reply

Intel to date has not done that with core counts. Cache defects have led to other processor derivatives, but there have been no 3-core or 2-core Nehalems from Intel so far, and I doubt you will see them here. Reply