Posted by kdawson on Tuesday June 09, 2009 @03:52PM from the two-bakers-dozens dept.

theraindog writes "The number of different CPU models available from AMD and Intel is daunting to say the least. The Tech Report's latest CPU review makes some sense of the landscape, exploring the performance and power consumption characteristics of more than two dozen desktop processors between the $999 Core i7-975 and more affordable sub-$100 chips. The article also highlights the value proposition offered by each CPU on its own and as a part of the total cost of a system. The resulting scatter plots nicely illustrate which CPUs deliver the best performance per dollar."

Skidpad is only a measure of static lateral grip - slalom speed is probably a better performance metric there. Corvettes are fast and cheap (and feel like it too), but I prefer things like the Lotus Elise.

Not to mention that for most folks that "ludicrous speed" is simply overkill. I've been building my customers a mix of Pentium Duals and AMD 7550s and all they can talk about is how blazing fast they are. Because for most of the stuff folks are doing today, even games, the CPU is rarely the bottleneck and passed "good enough" a few years back.

So at least for my customers and myself (I liked how the last AMD 7550 ran so quiet so I built one for myself) it is more about "bang for the buck", and when I can build a nice AMD 7550 with 4GB of RAM, a 780 board along with a 300GB SATA II drive and a nice black case for $281 shipped, it is just nuts to blow all that extra cash in this economy to get a bigger epeen. For most of us even the bottom-of-the-line dual cores are sitting idle a good 80% of the time, so why blow the extra cash? I just checked Process Explorer and with 9 tabs plus Comodo Internet Security on XP x64 I've still got 3.2GB of RAM free and am only using an average of 1.4% CPU, plus I've got another 1GB of RAM on the $50 HD 4650 for watching videos.

Considering my first machine was a VIC-20, where I had to PEEK and POKE everything, all this extra power is nice, but I doubt I'll be saying "you know what? I need more power!" for quite a while yet. So for a box that cost a hair over $500 counting the OS and GPU, I'm a happy little camper. While having ludicrous speed might be good if I was doing major CAD or graphics work, for the things that my customers and I do with our PCs there just really isn't a point. Hell, I gave my nearly 5-year-old 3.6GHz P4 to my oldest and he is blasting zombies in Left 4 Dead even as we speak, so even that CPU is "good enough" for what he wants to do with it. The "bang for the buck" you get nowadays even on the low end is frankly unreal!

I just ordered a new motherboard that I think is compatible with my old P4 570 CPU. (It's 800MHz FSB, so it should work, I hope!)

The one thing I've learned over the years is to buy a good motherboard that lists your current CPU as one of the oldest ones it supports -- that way you've got plenty of room for future upgrades if you need them. The simple truth is that if I end up upgrading my CPU in the near future, it'll be because the new mobo isn't compatible, not because I need more speed.

That's what I did with myself and my customers. For the office guys I maxed out the boards at 4GB, because they all have legacy apps and can't stand Vista. For myself and the home users I got boards that would take at minimum 4GB; most do 8GB and mine does 32GB ;-)

But all support the latest quads. Mine can take any Phenom II quad that is 95 watts, and honestly? I doubt I will ever get to the point I actually need quad. Most of the games I play are like BioShock or Sacred Gold and simply don't need the m

Fast computers are still needed if you're actually using them for work...

Just as an example, compiling+linking a nontrivial program with profile-guided optimization (say Firefox) using MSVC++ takes several hours on decent modern hardware. I'd love for that to happen in, say, seconds instead. ;)

Hey, I said for CAD/CAM and other "heavy lifting" that there was a purpose in having the ludicrous speed, I was just pointing out that for a good 80-90% of the home and SMB markets it is simply overkill as their PCs will be sitting idle most of the time due to the incredible processing power of even the lowest model AMD or Intel dual compared to the work they have for it.

BTW, it is a damned shame it takes that long, or else I'd be asking you about compiling a 64-bit Songbird. We have a 64-bit Firefox [mozilla-x86-64.com] which

Not embarrassing, just not as fast. A Ferrari is vastly more powerful, but a Corvette is still the much better deal for the money -- do I really need to explain this on here?

Yes. In my experience, computer nerds and muscle car freaks are groups that seldom overlap. As far as I'm concerned, Ferraris and Corvettes are the same thing: impractical, expensive cars for those going through a midlife crisis. I have no idea what you are trying to convey with this analogy.

AMD would end up with even better value on low-end if the cheap AMD system was built on a mobo with integrated GFX...which is good enough for everything except recent GFX-intensive games.

Yeah, there are always Nvidia chipsets... but for some reason, motherboards with them are definitely more expensive (at least where I shop) than comparable AMD-chipset boards for the AMD platform.

Seconded. I have a Radeon HD3200 and an Athlon 7750 in my Mythbuntu media center, and I couldn't be happier. Plays HD video like a dream and the total system including 4GB of RAM and 4 1TB hard drives clocks in at around $600 or so, $700 with the nice Antec case I got for it.

Yeah. I'm not worrying about 3D right now in it... I'm using the open-source radeon driver that ships with 9.04 because it has tear-free xv in it. The fglrx/catalyst drivers suck donkey cock at doing video. If you're just looking for a media center or non-3D stuff, I highly recommend sticking with the open source drivers.

On the plus side, ATI is devoting some serious resources to open-source drivers. Take a peek over at the Phoronix forums [phoronix.com] for some ATI devs posting the latest and greatest, answering quest

Price/performance is slope on that graph, and AMD looks pretty good at first look, occupying a band mostly on the top-left.

They ought to have plotted performance on the independent axis vs. price/performance on the y axis. And put that sucker on a log-log plot. Then we'll see who's got the best price/performance for various performance levels.
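If you want to eyeball that slope idea yourself, it's just performance divided by price. A quick sketch with made-up numbers (these prices and benchmark scores are invented for illustration, not TR's actual data):

```python
# Performance per dollar is the slope of the line from the origin to each
# point on a performance-vs-price scatter plot. Numbers below are invented,
# loosely shaped like the $75-to-$999 spread in the article.
chips = {
    "budget dual-core": (75, 100),   # (price in $, benchmark score)
    "mid-range quad":   (200, 180),
    "i7-975-class":     (999, 320),
}

for name, (price, perf) in chips.items():
    print(f"{name}: {perf / price:.2f} points per dollar")
```

On a log-log plot, points with equal perf/$ fall on parallel lines, so the diminishing returns at the top end jump right out.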

I don't know about that, the AMD X4 955 is on par performance and price wise with the Intel Q9550. Of course, the i7 line is probably what you mean by high end, but you're paying at least $200 - $900 more for a 25-75% performance gain.

The price/performance graph makes it look like there's a big difference in price between those chips, yeah. In reality, the Q9550 is only $15 more than the X4 955. With a gap that small, I'd put that down more to choice of other components than to "wow, the AMD chip is so much cheaper for the same performance, I'd better run out and get one."

Personally I don't find it that trivial; never before has the best choice been so extremely dependent on what you need the performance for.

Say you've got some embarrassingly parallel task, like ray tracing. With the huge price/performance difference between low end and high end you might beat the high end in performance by buying two cheap systems rather than one expensive one. Look at the i7-940; it's just barely twice as fast as the cheapest CPU, yet it costs six times as much. That price would easily accommodate a cheap motherboard and memory, and you'd still be shelling out less.
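The two-cheap-boxes-vs-one-i7-940 math works out like this; the prices and the 2x/6x figures are rough stand-ins for the numbers in the comment, not exact quotes from TFA:

```python
# Back-of-envelope for an embarrassingly parallel job like ray tracing.
cheap_cpu_price = 100       # $, hypothetical cheapest CPU
cheap_perf = 1.0            # baseline performance
i7_940_price = 600          # ~6x the price...
i7_perf = 2.0               # ...for ~2x the speed

extras_per_box = 150        # cheap motherboard + RAM for each extra system

# How many whole cheap systems fit in the i7-940's price?
n_boxes = i7_940_price // (cheap_cpu_price + extras_per_box)
total_perf = n_boxes * cheap_perf
spent = n_boxes * (cheap_cpu_price + extras_per_box)

print(f"{n_boxes} cheap boxes: perf {total_perf} for ${spent} "
      f"(one i7: perf {i7_perf} for ${i7_940_price})")
```

Two cheap boxes match the i7's throughput while leaving cash in your pocket, assuming the task really does scale cleanly across machines.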

On the other hand, say you have very few parallel tasks; then you may still be as well off with a cheap CPU. Without several tasks you're not going to see much difference between a dual core CPU with good per-core performance and a quad core. Not an entirely rare situation when talking about desktop systems.

Or you might be in the sweet spot of latency sensitive parallel tasks, possibly applicable to some games, in which case a more expensive quad core CPU might definitely be what you're looking for.

And add to that the need for actual application specific benchmarks to determine actual performance as opposed to generic benchmarks... well, I wouldn't call the choice trivial.

Notice that we are making some assumptions here that may not be entirely valid. For instance, we've priced the Socket AM3 Phenom II processors on a Socket AM2+ motherboard with DDR2 memory, though we tested most of them with DDR3 memory. As you may have noticed, memory type didn't make much difference at all to the performance of the Phenom II X4 810

and we expect the story will be similar for the rest. In the same vein, we priced the Core 2 processors with DDR2 memory, though we tested them with DDR3. Our goal in selecting these components was to settle on a standard platform for each CPU type with a decent price-performance ratio, not to exactly replicate our sometimes-exotic test systems.

"and we expect the story will be similar for the rest."

You expect that, huh? Based on what? My experience with Intel processors has always been that they're memory starved. Core i7 may be a huge improvement, but please provide me proof. After all, DDR3 is more expensive, Core i7 CPUs are more expensive, and Core i7 boards are more expensive. It adds up to a 50% jump in cost. (~$500 -> ~$750+)

I'd like to know whether that was factored in - but the quote right there makes me think maybe it wasn't?

The only significant difference between Intel and AMD processors on the memory BW front is that AMD moved to the integrated memory controller two generations before Intel did. If your experience is that Intel processors have always been memory starved, then your experience is limited to the time between when AMD integrated the controller and when Intel integrated the memory controller.

On the question of memory BW it is that simple. The company that integrated the memory controller first was ahead on deliv

All the chips from both companies potentially get a boost from DDR3 memory over DDR2 memory. But, as the article demonstrates on the other pages, for the benchmarks they are looking at, the sensitivity going from DDR2->DDR3 is small.

I only talked about it in terms of integrated memory controllers because you brought up the idea that Intel parts have always been "memory starved". That statement is what I was contesting. Intel parts have only been memory starved on some workloads, relative to AMD, when AMD had an integrated memory controller, and Intel didn't. The blanket statement that this has always been the case is not true.

You should've been fighting my conclusion - they couldn't be bothered benching with the hardware they priced out. I mentioned the i7's improvement in my first post, but I'd still like some numbers on just how big a difference it makes.

As to your point that you want to see data justifying the claims that DDR2 vs DDR3 in their study isn't all that relevant, the article provides that data. One may question the workloads they chose, but the data justifying the claim within the scope of the workloads that they did choose is there.

Not if you factor in the motherboard price difference - about $100 for x58 over AM2+. The i7 is brilliant for heavy, multi-threaded workloads but I question how much of that performance is really worth it for an average consumer or even Slashdot reader.

I'd also question its price/longevity scale as well, since the sort of folks who are going to drop $700+ on a basic (no video card, no monitor, no hd's) high performance system are probably the same sort of folks who are going to scrap that box as soon as Larrabee hits.

My AMDx2 3800+ is starting to show its age, but there is no way I am going to buy a high end part with Larrabee so close.

When I can, I buy the highest end I can and use it for 5 years. I might upgrade a video card. After 5 years, it gets deprecated into the household pool. I have 5-year-old computers running games like Team Fortress 2 and WoW just fine. Not the highest-end graphics performance, but fine enough.

When I build my next box, it will have 16GB of RAM, minimum. Hopefully 32. This will probably stretch its use out to 8 years.

2 years ago I got an Athlon 64 X2 6000+, a motherboard that supports 16 gigs of RAM, a GeForce 8600 GTS (512MB), 4 gigs of RAM, and a 256GB SATA II hard drive for about $500-600 total and just threw it in an existing case with other peripherals. At today's prices I can upgrade the RAM to 16 gigs and the vid card to something more recent pretty dang cheap.

The CPU is a bit dated but not much of what I do is CPU bound anyway. I run a lot of concurrent VMs that mostly just sit around and play a game here or t

I upgrade on the order of every four years. This means CPU, MB, RAM and graphics card. No way around it.

That being said, I plan on upgrading to an i7 920. My current system is a desktop, mail-, web-, file-, print-, DNS-, and DHCP-server, router, firewall, and a few more things I am forgetting. [Of course I run Linux]. My current P4 2.8GHz (a space heater that also does some computation) can't handle much when I run a simple multi-threaded desktop app. The CPU has 1 core and two "hyperthreads". I look

High end version? Do you understand Intel's processor path? It's a mandatory upgrade every new socket. Maybe you might want to look into AMD if you want to not waste cash on an i7 920, which is being discontinued [bit-tech.net], by the way. Mind you, I use an i7 920, but only because I got it free. Had I not, I'd be running AMD.

I've got an AMD Athlon X2 4850e. The time before that I lasted 6 years. So far I'm having a hard time seeing the value proposition in replacing the 4850e in the foreseeable future. There is still nothing I can see that beats its combination of performance, low power consumption and price.

In fact, it would make more sense to calculate a cost per year to own rather than the outright purchase cost, since this is what it really costs a person. Probably most
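A cost-per-year figure like that is easy to sketch. The wattages, electricity price, and lifespans below are guesses for illustration, not measured numbers:

```python
# Amortized yearly cost: purchase price spread over the machine's life,
# plus electricity for typical daily use.
def cost_per_year(price, years, watts, cents_per_kwh=10, hours_per_day=8):
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return price / years + kwh_per_year * cents_per_kwh / 100

# A cheap, efficient 4850e-class chip kept for 6 years vs. a pricier,
# hungrier chip replaced every 3 years:
print(round(cost_per_year(100, 6, 45), 2))
print(round(cost_per_year(500, 3, 95), 2))
```

By this measure the long-lived low-power box wins by a wide margin, which is the point: the sticker price alone isn't what ownership actually costs.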

The Phenom II X4 955 beats the i7 920 in 3 out of the 4 games they tried. The only one it lost was Crysis Warhead, and it was a narrow loss (48 vs. 46 FPS).

The price difference between these two chips is about $35 on Newegg. I think for gamers, getting the X4 955 and putting that extra $35 towards the video card will net better results. This isn't counting the additional cost of DDR2 vs. DDR3 memory, which has minimal effect on performance right now but still has a big price difference.

I know that some people hate to hear the "yea, but if you overclock part X" argument, but here goes...

You can pick up a Core 2 Duo E8400 Wolfdale ($168 @ Newegg) and an Arctic Freezer 7 Pro HSF ($37), and perform a very, very modest overclock from 3.0 to 3.33GHz for a total of $205. Hell, you could probably perform this overclock with the stock cooler with no issues and save a further $37.

Now, you have the equivalent of the $270 E8600 c2d which also rates high in their gaming benchmarks (beating the phenom ii

"Now, you have the equivalent of the $270 E8600 c2d which also rates high in their gaming benchmarks (beating the phenom ii 955, i7 920, and even the i7 940 in their hl2 and crysis warhead benches, and only slightly losing to the 955 and i7 940 in farcry 2)."

Maybe I'm missing something but the links you posted do not show the E8600 beating the 920, 940 or 955, except where it beats a 920 by less than 1 frame per second. All that tells me is that games are not CPU bound.

"Maybe I'm missing something but the links you posted do not show the E8600 beating the 920, 940 or 955, except where it beats a 920 by less than 1 frame per second. All that tells me is that games are not CPU bound."

I want full detail at 1080P60 which is the native resolution of my LCD. Unfortunately I don't want to pay for either the components or the power to achieve that. I get closer to 1080P30 with an Athlon x2 4200+HE and 9600GSO, my whole system uses about a third the power of just the graphics cards in a top end system today.

In science we call that honesty. You can make a tiny little difference look REALLY BIG by cropping your graph so that it only shows a very tiny range. By showing the origin tiny differences look, well, tiny.

When you chop out some of the "wasted space" in a graph, you distort the graph. Unless people are careful and check where your axes begin, and then mentally visualize where the axes go, they'll get a misleading idea of the data from the graph.

Suppose the bottom part of the graph was sliced off, at the 90% line, to make you happier. Imagine what it would look like. The AMD X2 6400+, sitting at the 100% line, would have very little white space under it; and the Intel i7-920, sitting a bit below the 200% line, would now appear to be ten times faster than the AMD X2 6400+. The numbers would be the same, but the visual impact would be that the Intel chip totally blows away the AMD chips.
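That "ten times faster" effect is pure arithmetic, using the comment's own example numbers:

```python
# Scores from the example: AMD X2 6400+ at the 100% line, i7-920 a bit
# below 200% (call it 190%).
amd, intel = 100, 190

# Honest axis, origin at zero: the true ratio.
print(intel / amd)                         # 1.9x

# Crop the axis at the 90% line and compare the *visible* bar heights:
origin = 90
print((intel - origin) / (amd - origin))   # 10.0x -- same data, wild distortion
```

Same numbers, but the cropped chart makes a sub-2x gap look like an order of magnitude.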

The graph is good the way it is.

I'll meet you halfway, though: it wouldn't have hurt for them to have put in a second chart, zooming in on just the most crowded areas.

The first takeaway I get is that there is an actual, substantial positive correlation between cost and performance. This is a good thing. If I were in a cynical mood I would have guessed that the correlation would have been small or non-existent. The other thing to note is that there are some CPUs that by this metric are clearly just not very worth it where there are cheaper ones that perform better. So, more expensive generally means better, but not always. So CPUs are sort of like wine?

You perhaps aren't cynical enough. I wonder how much this chart would change if the chips were overclocked. That would tell you if some of the slower processors were simply underclocked versions of the faster ones.

Yup, just like cars (once upon a time, you could spend $36k on a PT Cruiser), houses, internet subscriptions, insurance, meat, items on eBay, and just about anything that has a varied price range and is subject to advertising, brand loyalty, or outright cons.

I've always tended to buy right behind the cutting edge; processors usually take a dive in price and speed. Like when I bought my processor: I paid $300 for a quad core Intel last summer. The next step up was $395, then $600, then $1000, but the $1000 wasn't 3.3x faster, the $600 wasn't 2x faster, etc. However, the $150 Intel chip was 40% slower, so the price/performance sort of follows a log function where the trick is finding the right place before the price starts to skyrocket in relation to performance. I'm sure that if you could find a bunch of old Pentium 1 processors on the super cheap, in bulk, you might get the same price/performance as a new mid-level Intel chip, but why would you want to run a Beowulf cluster to get the same throughput? (Ignoring the price of all of the other components, since the article is about price/cost of the CPU alone, that was the metric I went by as well.)

I wonder what the price/performance ratio of a pentium 1 that cost $0.01 is.

I get your underlying point, but wine has more subjective values (wine enthusiasts would debate this, though) that come into play, whereas CPUs have more quantifiable comparisons. That said, for both CPUs and wine I would think that most of the people who are buying "high end" products are a little more discerning about their purchases.

The other thing to note is that there are some CPUs that by this metric are clearly just not very worth it where there are cheaper ones that perform better.

Depends on how much you value the time it takes to complete $PROCESSOR_INTENSIVE_TASK. If I hired software engineers for $100,000 a year, that's about $1 a minute, and so it might very well be "worth it" to spend $1000 on the latest N-core processor (compilation being nearly infinitely parallel) if that saves 5 minutes per compile over 200 compiles in the year.
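Spelled out with the comment's round numbers ($1/minute is its own approximation of a $100k salary with overhead; the 5-minute and 200-compile figures are its hypotheticals):

```python
dollars_per_minute = 1.0            # ~$100k/year, rounded up with overhead
minutes_saved_per_year = 5 * 200    # 5 min per compile, 200 compiles/year

value_of_time_saved = minutes_saved_per_year * dollars_per_minute
cpu_cost = 1000                     # the latest N-core processor

# Under these assumptions the $1000 CPU pays for itself within the year.
print(value_of_time_saved, value_of_time_saved >= cpu_cost)
```

Change any one of those inputs (cheaper engineers, fewer compiles) and the conclusion flips, which is exactly why "worth it" depends on the workload.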

The only thing the plot proves is that there are diminishing returns, not that any particular price/performance point is not "worth it".

Um, ARM CPUs really aren't high end in the least. Sure, they might be decent enough for web browsing, but tell me whenever I can get an ARM CPU that can render, play games, etc. Until then I'm sticking with x86. All ARM-based CPUs are good for is low power systems like phones, etc.

Because desktop PCs of the world DO run Intel compatible chips, and that is just how it is. Doesn't matter if you don't think it should be that way, that's how it is and that's what tech sites deal with. Windows is of course the by far dominant desktop OS with over 90% market share. Well Windows only runs on Intel and AMD chips. Most versions are x86 and x64 only, there are a few IA64 (Itanium) versions. No ARM, no Power, etc.

After Windows is MacOS. All new Macs sold for a number of years have been Intel ch

Windows CE is bollocks; it's barely acceptable for a smartphone or PDA. I have a tablet with it (WebDT 360) and it's a nightmare trying to use it as a real OS. (I'm working on building Angstrom Linux, have got a build, now need a customized kernel and X server. Fun times.)

It makes little sense to compare ARM boards. They are usually power-optimized (with peripherals consuming substantial fraction of power), not performance-optimized. Also, end-user devices might be underclocked and undervoltaged. Besides, if you want performance with non-x86 CPUs, then look at MIPS CPUs (which are usually used for things like AV processing in set-top boxes).

What is the cheapest CPU that can play back 720p Flash well? That is probably a good CPU for 99% of the population. Flash is a resource hog and is likely to be the most intensive thing that most people use. The next step up would be to list several games and see what is the cheapest CPU that can play them at, say, 60FPS at good settings with a $99 video card.

If you're a video editor, a hardcore PC gamer, transcode a lot of video, or run CAD, get the fastest CPU you can afford. So hardcore types should buy i7s, and pretty much everybody else should buy AMDs once you take into account RAM and motherboard prices.

Also, if you are planning on running virtual machines, AMD is often a better choice. Intel doesn't support virtualization on a lot of their CPUs, while I think AMD does on their AM2 and up CPUs.

If all you want is 720p Flash, pretty much anything on the market will do. My wife's laptop with a C2D T2080 @ 1.73GHz manages it fine. Comparing it against other desktop processors of similar performance here (http://www.cpubenchmark.net/cpu_lookup.php?cpu=Intel+T2080+%40+1.73GHz), and then looking it up on ebuyer.com, suggests that a £40 processor is fine for 720p Flash.

That is kind of the point. If you want to step it up, then go for one that supports 1080p, and that will be good enough for most people. And actually, I think the Atom doesn't handle 720p all that well yet.

That is the interesting thing about CPUs today: the best price-to-performance ratio isn't always important. If you have no need for the extra performance, don't get it. As it is, very few applications will use an i7 well. Most people just don't need 8 threads.

Nowhere in the post I responded to does the word "flash" even appear.
I've got an HP Mini 2140 with a 1.6GHz Atom, 2GB RAM and the standard Intel GMA 950 graphics adapter, and have no problem whatsoever playing 720p encoded video via the VGA output port, including numerous H.264 encoded videos (which is the same format used by Flash [wikipedia.org])

"Flash Player 9 Update 3, released on December 3, 2007,[2] also includes support for the H.264 video standard (also known as MPEG-4 part 10, or AVC) which is even more comp

But, to answer your question, probably the new class of netbook like the Pegatron [engadget.com], which, interestingly enough is running a Freescale processor with an ARM based core. This little netbook also has flash based GPU acceleration (supposedly), is incredibly thin and sports I think 6 hours of battery life.

Do any of the CPU reviews use old CPUs? What I want to know is how much faster today's CPU is compared to my 3-6 year old CPU, but these hardware reviews typically have a low end much newer/faster than my current system. Practically, a 50% CPU edge is too marginal for me to upgrade, but if a new system was 3X faster than my current aging machine I would be tempted!

Tom's CPU chart is a great tool, but they don't generally compare older chips to newer ones. They also change the testing criteria from time to time, so there's no real way to directly compare old vs. new.

Anandtech has a new CPU benchmark site that compares everything from a single-core Atom up to the top-of-the-line Core i7. They've also recently added two Pentium 4 chips to the mix so you really can directly compare them to the newer stuff. Check it out:

The performance scales sub-linearly with the price, and ends up almost flat at the extreme end. This means you need to examine the cost of SMP vs. a more powerful CPU. Two X2 6400+ chips in an SMP should give you about the same performance at the same cost as one i7-920, after you add in the extra for the upgraded chipsets and mobo.

More powerful low-end chips become more and more effective when SMPed versus their higher-end rivals. The other benefit of going SMP is that you have fewer cores sharing the same cache, which increases the number of distinct tasks you can perform in parallel effectively without cache-flooding.

Of course, you can't SMP forever -- the largest SMP array you can build before the overhead costs more performance than the extra CPUs add is about 16-way. Even before then, you lose linear scaling fairly early on. So you end up balancing the different CPUs against the different methods of arranging them to get the best performance for your money.
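One standard way to see why the scaling flattens out is Amdahl's law; the 5% serial fraction below is an assumed figure, and real SMP pays interconnect and cache-coherency costs on top of it:

```python
# Amdahl's law: the serial part of a workload never gets faster, so it
# caps the speedup no matter how many CPUs you add.
def speedup(n_cpus, serial_fraction=0.05):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_cpus)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d}-way: {speedup(n):.2f}x")
```

With 5% serial work, 16-way only gets you ~9x, doubling to 32-way only reaches ~12.5x, and the hard ceiling is 20x, which is why linear scaling dies well before the socket count does.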

Dual and Quad socket motherboards are exceptionally expensive and typically require registered/ecc RAM. I would genuinely like to see a setup with comparable CPU features to say, a core i7 920, where you'd get comparable performance for the price using two processors. (no used prices obviously, since we need apples v apples).

> Apparently, the tech report should have benchmarked their web server before putting this article up.

It's probably more bandwidth related than computer related. When the County first put our sex offender registry online, we had a measly T1 which got saturated by the community within minutes of the local news stations carrying the announcement. Our (low-power at the time) Linux web server registered 0.00 0.00 0.00 loads most of the time. If it wasn't for the server logs expanding rapidly, I wouldn't

It's too bad the article doesn't talk about things like Execute Disable, virtualization support, etc. For a power-user audience like /. these are important considerations.

For me not being able to install Xen, or Windows 7 XP mode, etc are complete deal killers. I want CPUs with those features, especially when shopping "value CPUs".

Getting something like an E8190 is a mistake that will bite a /. power user in the ass eventually, even if it is a few bucks cheaper than an E8200 and delivers the same performance, at the same wattage, etc...

I've been something of an AMD fanboy ever since the Athlon came out, but I just bought an Intel Atom 330 for a lightweight file server, and I have to say I'm thoroughly impressed. 64-bit, dual-core, virtualization extensions, and low-power to boot for around $80 which includes the motherboard. Simply unbeatable.

Also wanted to mention that these guys [cpubenchmark.net] have easy-to-read benchmark charts of a wide variety of CPUs. Certainly more than the 26 in TFA. Benchmarks don't tell the whole story of course, but it's a good start for quick-and-dirty comparison.

Honestly, it doesn't matter much any more. Except for certain things like audio and video ripping/encoding, 3D design rendering, etc., most any modern dual-core CPU will do whatever a home user needs just fine. Yes, even for gaming. I'm running a socket AM2 Athlon 5600+ and it's not been a bottleneck yet. My video card could stand to be upgraded for some games though.

So your method of choosing a CPU makes more sense than any other right now.

Criticism: they could've been $900 systems if you dropped the quads and got 500W Corsair PSUs.

The computers are only going to use around 350W at max load. Games don't utilise more than two cores (with some rare exceptions) so you could've got better real-world performance for less money with a dual core CPU. The E8400 for example.