What makes these new Opterons truly intriguing is that they will offer a user-configurable TDP, which AMD calls TDP Power Cap. This means you can buy pretty much any CPU and then scale down the TDP to fit within your server’s power requirements. In the server market, performance isn’t necessarily the number one concern the way it is when building a gaming rig. As all the readers of our data center section are aware, what really counts is the performance per watt ratio: servers need to be as energy efficient as possible while still providing excellent performance.

John Fruehe (AMD) states, "With the new TDP Power Cap for AMD Opteron processors based on the upcoming 'Bulldozer' core, customers will be able to set TDP power limits in 1 watt increments." It gets even better: "Best of all, if your workload does not exceed the new modulated power limit, you can still get top speed because you aren’t locking out the top P-state just to reach a power level."

That sounds too good to be true: we can still get the best performance from our server while we limit the TDP of the CPU. Let's delve a little deeper.

Power Capping

Power capping is nothing new. The idea is not to save energy (kWh), but to limit the amount of power (watts) that a server or a cluster of servers can draw. That may sound contradictory, but it is not. If your CPU processes a task at maximum speed, it can return to idle very quickly and save power there. If you cap your CPU, the task takes longer, and your server ends up using about the same amount of energy because the CPU spends less time at idle, where it could save power in a lower P-state or even go to sleep (C-states). So power capping does not make any sense in a gaming rig: it would reduce your fps and not save you any energy at all. Buying CPUs with a lower maximum TDP is similar: our own measurements have shown that low power CPUs do not necessarily save energy compared to their siblings with higher TDP specs.
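The "race to idle" trade-off above can be sketched with a toy energy calculation. All wattage and timing figures below are hypothetical illustrations, not measurements from any real CPU:

```python
# Illustrative "race to idle" comparison over a fixed time window.
# The power and duration figures are hypothetical, chosen only to show
# why capping power does not necessarily save energy.
def energy_joules(active_power_w, active_seconds, idle_power_w, idle_seconds):
    """Total energy = active power * active time + idle power * idle time."""
    return active_power_w * active_seconds + idle_power_w * idle_seconds

WINDOW_S = 10.0   # fixed measurement window in seconds
IDLE_W = 15.0     # assumed idle power in a deep P-/C-state

# Uncapped: burst at 100 W for 4 s, then idle for the rest of the window.
uncapped = energy_joules(100.0, 4.0, IDLE_W, WINDOW_S - 4.0)

# Capped at 60 W: the same task now takes 7 s, leaving less idle time.
capped = energy_joules(60.0, 7.0, IDLE_W, WINDOW_S - 7.0)

print(uncapped, capped)  # → 490.0 465.0, roughly comparable totals
```

The point of the sketch: capping trims the peak draw substantially (100 W down to 60 W), yet the total energy over the window barely moves, because the slower run steals time from the cheap idle phase.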

In a data center, you have lots of servers connected to the same power lines, which can only deliver a certain amount of current (amps) at a certain voltage (48, 115, 230 V, etc.). You are also limited by the heat density of your servers. So the administrator wants to make sure that the cluster of servers never exceeds the cooling capacity or the current limits of the power lines. Power capping makes the power usage and cooling requirements of your servers predictable.
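A quick back-of-the-envelope sketch shows why predictable per-server power matters for provisioning. The circuit rating, derating factor, and per-server wattages below are hypothetical examples, not vendor figures:

```python
# Hypothetical rack power-budget check: how many servers fit on one circuit?
def max_servers(circuit_volts, circuit_amps, derate, server_peak_watts):
    """Usable budget (W) = volts * amps * derating factor; servers must
    never collectively exceed it, so provision for each server's peak."""
    budget_w = circuit_volts * circuit_amps * derate
    return int(budget_w // server_peak_watts)

# A 230 V / 16 A feed, derated to 80% for continuous load (common practice).
print(max_servers(230, 16, 0.8, 350))  # uncapped 350 W peak → 8 servers
print(max_servers(230, 16, 0.8, 250))  # capped at 250 W    → 11 servers
```

Because the administrator must provision for the worst case, lowering the guaranteed peak with a power cap directly raises how many servers each circuit (and each unit of cooling) can host.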

The current power capping techniques limit the processor P-states: even under heavy utilization, the CPU never reaches its top frequency. This is a rather crude and pretty poor way of keeping the maximum power under control, especially from a performance point of view. The thing to remember here is that higher frequencies almost always improve processing performance, while extra cores only improve performance in ideal circumstances (no lock contention, enough threads, etc.). Limiting frequency in order to reduce power often leaves a server running far below what it could deliver, in both performance and power use, just to be "safe".
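The difference between the classic P-state lockout and what AMD describes can be modeled in a few lines. The P-state table below is a hypothetical frequency/power mapping, and `tdp_power_cap` is my own illustrative reading of AMD's claim, not their actual mechanism:

```python
# Toy model contrasting coarse P-state capping with a watt-based cap.
# The (GHz, W) pairs are a hypothetical P-state table, highest state first.
P_STATES = [(2.6, 115), (2.2, 95), (1.8, 80), (1.4, 65)]

def pstate_cap(limit_w):
    """Classic capping: permanently lock out every P-state whose rated
    power exceeds the limit, regardless of the actual workload."""
    for freq, watts in P_STATES:
        if watts <= limit_w:
            return freq
    return P_STATES[-1][0]  # fall back to the lowest state

def tdp_power_cap(limit_w, workload_draw_w):
    """Sketch of AMD's described behavior: the top P-state stays
    available as long as the workload's real draw fits under the cap."""
    if workload_draw_w <= limit_w:
        return P_STATES[0][0]  # full speed
    return pstate_cap(limit_w)

print(pstate_cap(100))          # → 2.2, top state locked out for good
print(tdp_power_cap(100, 90))   # → 2.6, light load still runs full speed
```

Under a 100 W limit, the classic scheme forces 2.2 GHz at all times, while the watt-based cap only throttles when the workload would actually push past 100 W, which is exactly the benefit Fruehe describes.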

IDK, after years of AMD CPU domination, Intel was more than happy to let everyone talk about Conroe and benchmark it in public. So much so that it drove up prices of the chips when they were finally released.

The reverse is true now, and I just don't see the same enthusiasm from AMD on Bulldozer. Maybe these 8-cores will be on par with the 2600K? But Intel is still holding onto 6-core Sandy Bridge.

Methinks AMD has another Phenom on its hands: big, low clock speeds, weaker than expected performance. Eventually, AMD is going to have to improve the performance of its cores, not just keep adding more crappy ones.

I think it's quite clear to everyone that AMD went to a whole new level with this design, so much so that it is even hard to say how many cores the processor actually has. Like JF-AMD said, people buy processors, not "cores". So if Bulldozer's die size is smaller than Sandy Bridge's and it uses an equal or lower transistor count, then AMD made the better processor. What we have to look at now is not cores anymore; AMD could have split one large core into three instead of two, and we would be hearing the same arguments about three cores vs. a single core. What we need to watch is how both companies used the real estate of the die; whoever used less and accomplished more made the better CPU. And I fully expect an Intel processor to copy Bulldozer in the near future. Cross licensing sucks.

Cliff notes:
1. Don't run the Intel 320 SSD in any machine that needs perfect reliability or any kind of mission-critical software.
2. Back up all data on current drives immediately.

I post it here so maybe some Anandtech guy can address the issue, since they seem to be unaware of this problem that has been reported for some months.

Concerning the reliability of the Intel SSD 320 (and perhaps the 510 too):

Huge number of complete data losses for users. Intel finally admits the problem exists.

Power failures and instant shutdowns cause the issue.

No reliable information yet on whether it is a firmware issue, a design problem (bad design), or a hardware problem (the controller, etc., at least at this spec). A simple firmware update is most likely to solve the issue eventually.

Erik

--------------------------------------

"Be wary of the new Intel SSD 320 series. Currently, there's a bug in the controller that can cause the device to revert to 8MB during a power failure. AFAIK they have not yet publicly announced it, and won't have a firmware fix ready for release until the end of July."

It's what TDP stands for, and I don't think it's in the article (it's the amount of heat, measured in watts, that must be dissipated by the heatsink to keep the CPU operating safely). I had to stop reading on page two and leave AT.com to go find the answer. Please explain your acronyms... it's really annoying to read about something and feel too dumb for the article, and it's never a good idea to give readers a reason to leave your website. :)

The sad part about the reality we currently face is that there really hasn't been a large increase in CPU performance since the Nehalem launch nearly three years ago.

AMD's release of the Phenom II line kept them in it, as they were able to offer lesser performance at far lower cost. Sandy Bridge changed all that. While again, it doesn't really perform that much better than the high-end Nehalems launched three years ago, or that much worse than the 990X, it is far cheaper than those $999 price tags. By performing as well as the old high-end chips while being priced much lower, Sandy Bridge really eroded any reason at all to buy or build an AMD-based system at the enthusiast level.

Bulldozer, with a street price reported to be around $300, needs to be faster than Sandy Bridge and needs to launch sooner, in Q3 rather than Q4 (October). If it only reaches parity, then the reality would be that AMD was finally able to develop a chip that matches the performance Intel had three years ago. With Ivy Bridge, the successor to the high-end throne, set to ascend in Q1 2012, would it then take AMD another three years to match that performance? It seems as though they are falling further and further behind. But this is all speculation. I suppose we'll see what tomorrow brings.