Posted
by
kdawson
on Tuesday September 12, 2006 @08:50PM
from the cpu-shootout dept.

ThinSkin writes, "ExtremeTech has an extensive performance roundup across the entire line of Intel Core 2 Duo and AMD AM2 CPUs, from the cheap to the ultra-high end. Both companies bring five processors to the table, ranging from $152 to $1,075, with the mid-range CPUs boasting the best in price/performance. From the article: 'It's clear that Intel's Core 2 Duo lineup offers superior performance across the product line when compared with AMD's Athlon 64. In some applications, even a lower-cost Core 2 Duo can outperform some of the higher-end Athlon 64s.'" The ExtremeTech article is spread over 10 ad-laden pages. You can read it all on the printer-friendly page, but you'll miss out on the pretty graphs.

Ah, competition at its finest. Although AMD seems a bit behind Intel in speed right now, I'm glad it's there. Without head-to-head competition from AMD, Intel would probably still be pushing higher GHz with little consideration for performance, heat, and power usage. I won't be too surprised if in a year or so AMD is faster, and then in a couple of years Intel is faster again. With these two fighting it out, the consumer wins, as the companies compete on performance and price. I'd say it's best not to be in love with either company, because if this processor war is ever won, we the consumers will lose.

I second that. I've been waiting for today's keynote in the hopes of the Core 2 Duo update... but all we got were more iPods and Disney movies. Woo! I'm about to give up the wait if nothing changes by next week... anyone got any ideas?

I purposefully know as little about Macs as possible, as I hate the user-mystique thing (my last Apple product was an Apple III), but I thought they run on Intel CPUs now? I bought an HP 8230 Centrino Core Duo T2300 laptop a couple of months ago that was rated to run Vista (Core Duo, Nvidia GeForce 7400 with 256 MB, 1 GB RAM) (not that I'll ever run Vista). The sucker will run anything, and the battery lasts about 2 hours playing Oblivion at full resolution. But it seems that if Mac OS will run on Intel…

Every stinking Intel/AMD article has this same goddamn statement. Who the hell is this insightful to?

Me... I was thinking that by moving to a single-payer, government-sponsored chip manufacturer we could eliminate wasteful overhead, advertising, executive salaries, and that irritating itch to upgrade every 6 months.

You damn Libertarians need to realize the free market isn't for everything...

Isn't Core 2 Duo very energy efficient? I think this was the most exciting aspect of Core 2 Duo. I am so sick of noisy fans, high power usage, and big clunky cases. Give me high performance, low power, and quiet. This is the future of personal computing.

Actually when you calculate performance per dollar, it is closer than most think right now. This article is comparing a $200 Intel processor to a $150 AMD processor. When you compare the $200 AMD to the $200 Intel, not only are they neck-and-neck, but in certain benchmarks, the AMD comes out on top.

Imagine that.

Perhaps those that read articles and think for themselves will see such things. Those that only read headlines and troll won't.

Intel does have a very good processor line on their hands with the Core 2 Duo. Even the AMD fans admit that. No one has said otherwise. It is the Intel fans who refused to acknowledge how far behind they were for four years. Now both are striving to be top dog. AMD claims they will be the best with the 4x4 line soon, and no doubt Intel will respond with a new line of their own.

Meanwhile, performance is going up considerably, and prices are coming down at the same time. I built my AMD 3000+ system two years ago, and I can't believe what you can build now for the same price.

Microsoft doesn't profit directly from DirectX. Instead, by making Windows a better platform for game development they, shock, get more game developers on Windows.

Also note that Microsoft doesn't decide what features can and can't go into a DirectX 10 card - it sets a minimum featureset for cards that want the sticker. How horrible that a card being marketed as supporting DirectX 10 has to support DirectX 10 functions. (Remember that DirectX emulates in software the hardware functions your video card lacks, allowing games written for it to transcend specific video cards. If the video card doesn't support any of the advanced texture, lighting, and whatever-else features, you really have a DirectX 10 compliant CPU.)
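To make the minimum-featureset point concrete, here's a rough C sketch of my own (not from TFA) of what games have to do under Direct3D 9, where you still query caps bits per card; the point of the DX10 sticker is that this guesswork goes away because the whole featureset is mandatory. This uses the plain-C COM macros from d3d9.h; link against d3d9.lib.

    #include <windows.h>
    #include <d3d9.h>
    #include <stdio.h>

    int main(void)
    {
        IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        D3DCAPS9 caps;
        if (SUCCEEDED(IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                                               D3DDEVTYPE_HAL, &caps))) {
            /* e.g. require Shader Model 3.0, else fall back to a simpler path */
            if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
                printf("SM3.0 path available\n");
            else
                printf("card lacks PS 3.0, falling back\n");
        }
        IDirect3D9_Release(d3d);
        return 0;
    }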

A real game. Like Half-Life 2. Oblivion. Fear. Practically every PC game you see on the shelf in a Gamestop / Best Buy / Walmart / Wherever is written for DirectX.

The few exceptions are games that don't need the graphics power (rare; even PopCap games use DirectX with the "Hardware Acceleration" option) or some mostly older ones that use a version of OpenGL (another standard video cards try to support.)

Hey, take it easy on me, pal, I was joking. Not only have I played some really good (as in well-developed) games, but I have also programmed against DirectX and OpenGL enough to know that DirectX is a really good achievement by Microsoft. I am completely aware, for example, that DirectX > OpenGL in that it provides a complete multimedia SDK ("Solution" is the buzz^Wkeyword), whereas OpenGL provides only the 3D component.

I have also worked with the Linux equivalents, say SDL and Allegro, and in my opinion no…

SDL wraps/can wrap audio/graphics/input - basically everything you just said. On MS platforms it front-ends DirectX, so it's fairly obvious they're aiming for a cross-platform DirectX alternative. I've programmed against it and it's pretty easy to use.
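For the curious, here's roughly what "easy to use" looks like: a minimal C skeleton against SDL 1.2 (my sketch, not the parent's code). One init call covers video, audio, and input, and on Windows the backends sit on DirectX as described.

    #include <SDL.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* one call initializes the subsystems you ask for */
        if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) != 0) {
            fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }

        /* 640x480, 32 bpp; SDL picks the best available backend */
        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
        if (!screen) {
            fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
            SDL_Quit();
            return 1;
        }

        /* pump the event queue until the window is closed */
        SDL_Event ev;
        int running = 1;
        while (running && SDL_WaitEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        SDL_Quit();
        return 0;
    }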

Well, you know, IBM is still making chips; they're quite popular in the console market. I don't think there are any PCs using them now that Apple has gone Intel, although I'm sure IBM still sells servers that use their own chips. Oh, and then there are Sun processors for servers and workstations. There's a lot more choice out there than you think. If you're on the Windows platform, then you pretty much have just AMD and Intel, but if you're on another operating system like Linux/Unix, then…

While it is true that you can buy any chip you can imagine in a server, the original poster gave me the impression that he/she wanted a cheap solution with a simple chip-on-a-board (a la PegasOS). Unfortunately, the money is in complete systems tied in with services, so that's the last thing you'll catch IBM selling. And sure, IBM's chips are popular in consoles, but that's mostly because IBM is the only major chip house that will offer to develop custom chip designs. The game console companies help fund…

Xeons prior to Woodcrest did get stomped by Socket 940 Opterons; now, with Intel's dual-core Woodcrest against AMD's Socket F dual cores, I believe Intel holds a similar advantage to the one it has in the desktop line.

Yes, AMD's pressure has pushed Intel to make a lot of changes for the better. AMD's 40 MHz 386DX pushed Intel to release faster 486 chips... otherwise Intel would have ridden their overpriced 486DX-33 forever.

AMD and Cyrix produced Pentium clones which pushed Intel and forced them to reduce prices.

AMD's push to revive Socket 7 (Super 7) with the introduction of the 100 MHz bus and the K6-2 forced Intel to release the Mendocino Celeron. With on-die cache, it was one of the best budget gaming processors ever…

At what? That's the problem. These benchmarks you see on the web, where they run "media creation tests" or whatever, are not really indicative of the technology. At least with my test you know what MD5 or AES is. So when you see performance double from Pentium M to Conroe, you know Intel did something right for a change.

In pure ALU performance I'd say Core 2 and the AMD K8 core are on roughly equal footing. AMD still has faster multipliers (see the ECC/RSA benchmarks), but the Conroe has a decent ALU…

These benchmarks measure the number of processor cycles, so the clock speed is mostly irrelevant. FYI, it is rather common to count cycles when discussing the efficiency of crypto/hashing algorithms.
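For anyone who hasn't seen it done: on x86, cycle counting usually means rdtsc. A C sketch of my own below; mix() is just a stand-in for whatever routine (MD5, AES) is actually being timed, not the poster's real test code.

    #include <stdint.h>
    #include <stdio.h>

    /* read the CPU's timestamp counter (x86/x86-64, GCC inline asm) */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* stand-in for the real routine under test */
    static void mix(unsigned char *b)
    {
        for (int i = 0; i < 64; i++)
            b[i] = (unsigned char)(b[i] * 31 + i);
    }

    int main(void)
    {
        unsigned char block[64] = {0};
        const int runs = 1000000;

        uint64_t start = rdtsc();
        for (int i = 0; i < runs; i++)
            mix(block);   /* same small block every time, so it stays in cache */
        uint64_t ticks = rdtsc() - start;

        /* print a byte of the buffer so the loop can't be optimized away */
        printf("%.2f cycles/byte (junk=%u)\n",
               (double)ticks / ((double)runs * 64.0), block[0]);
        return 0;
    }

Divide ticks by bytes processed and the clock speed drops out of the comparison, which is why it's mostly irrelevant here.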

So what you have is essentially a synthetic synthetic benchmark. Does this count the number of cycles spent on the bus fetching memory? HyperTransport usually gets a win there. What about pipeline refill time after a branch misprediction? Why is the chip being compared an Allendale? I'd be far more convinced if I saw an actual benchmark. Until we're running real code on real data and measuring the wall clock on it, we have, as I said, a synthetic synthetic benchmark. A lot of fluff. It seems kind of odd that…

The data/code will be in at least the L2 cache. The test program isn't that big and I'm only processing the same data over and over and over [e.g. encrypt the same small blocks].

The results are very stable over multiple runs.

The point of my test is to measure ALU performance above anything else. And you can see from this test that ALU performance has gone up considerably since the Pentium M [and probably the first Core processor].

I tend to buy whatever speed CPU is at the knee of the price/performance curve. I usually just calculate this based on clock speed, which isn't perfect and is somewhat naive, but it works quickly when all the chips are based on the same core. For example:
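(The comment breaks off at the example, so here's a sketch of the calculation it describes, with made-up placeholder prices rather than TFA's table: compute GHz per dollar down the product line, and watch for the step where the marginal dollars-per-GHz jumps. That jump is the knee.)

    #include <stdio.h>

    struct cpu { const char *model; double ghz; double price; };

    int main(void)
    {
        /* hypothetical price ladder for a single core family */
        struct cpu line[] = {
            { "2.0 GHz part", 2.0, 150.0 },
            { "2.2 GHz part", 2.2, 180.0 },
            { "2.4 GHz part", 2.4, 240.0 },
            { "2.6 GHz part", 2.6, 400.0 },
            { "2.8 GHz part", 2.8, 700.0 },
        };
        int n = sizeof line / sizeof line[0];

        for (int i = 0; i < n; i++) {
            printf("%-14s $%4.0f  %.4f GHz/$", line[i].model,
                   line[i].price, line[i].ghz / line[i].price);
            if (i > 0) {
                /* marginal cost of the next clock step */
                double step = (line[i].price - line[i-1].price)
                            / (line[i].ghz  - line[i-1].ghz);
                printf("  (step: $%.0f per GHz)", step);
            }
            printf("\n");
        }
        return 0;
    }

With these made-up numbers the step cost roughly triples between 2.4 and 2.6 GHz, so 2.4 is where I'd stop.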

I'm disappointed to see that, as usual, the review contains no mention of 64-bit performance. Does anyone know of any place that provides 64-bit benchmarks for the Core 2 Duo?

As much as it's done for us in the last 20 years, 32-bit x86 is not the future. Linux was AMD64-ready three years ago, and Windows Vista, which is just around the corner, already puts more emphasis on the x86-64 platform than on x86. Reviewing the 32-bit performance of the Core 2 Duo is like reviewing Pentium processors based on 16-bit performance. Let's get some forward-looking reviews instead of backward-looking ones, please!

64-bit benchmarks???!? How many 64-bit applications are you running / are there in useful production?

Everyone's got the 64-bit crazies. I've had 64-bit technology a long time. Can you say RISC?

This is clearly a troll post, since you denigrate the availability of 64-bit computing in your first paragraph and then contradict yourself by claiming you jumped on the 64-bit bandwagon before everyone else, but I'll squash your post anyway.

Only a Windows user (or possibly a Mac user) would treat 64-bit computing as useless or unavailable. Linux has been available in 64-bit versions since the days of the DEC Alpha, or since 2003 if you count only AMD64. Almost every Linux application benefits from recompiling for AMD64 as opposed to x86 because of the increased register space, and the nature of open source guarantees the availability of such versions. Compute-intensive applications such as cryptography (ssh/scp over gigabit Ethernet) and media encoding (Ogg, MP3, MPEG) exhibit performance gains of over 100% with 64-bit operations, owing to the quadratic nature of block multiplication.
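To unpack the "quadratic nature of block multiplication": schoolbook bignum multiplication of n-limb numbers costs n^2 limb products, so doubling the limb width from 32 to 64 bits cuts the count by 4x, and AMD64 has a native 64x64->128 multiply to do each one. A C sketch of my own (using GCC's unsigned __int128 extension for the double-width product):

    #include <stdint.h>
    #include <stdio.h>

    /* r[0..2n-1] = a[0..n-1] * b[0..n-1]; 64-bit limbs, schoolbook method */
    static void mul_n(uint64_t *r, const uint64_t *a, const uint64_t *b, int n)
    {
        for (int i = 0; i < 2 * n; i++) r[i] = 0;
        for (int i = 0; i < n; i++) {
            uint64_t carry = 0;
            for (int j = 0; j < n; j++) {
                unsigned __int128 t = (unsigned __int128)a[i] * b[j]
                                    + r[i + j] + carry;
                r[i + j] = (uint64_t)t;
                carry    = (uint64_t)(t >> 64);
            }
            r[i + n] = carry;   /* top limb of this row */
        }
    }

    int main(void)
    {
        enum { BITS = 1024 };   /* e.g. one RSA-1024 modulus multiply */
        printf("32-bit limbs: %d limb products\n", (BITS/32) * (BITS/32));
        printf("64-bit limbs: %d limb products\n", (BITS/64) * (BITS/64));

        uint64_t a[16] = { 3 }, b[16] = { 5 }, r[32];
        mul_n(r, a, b, 16);
        printf("low limb of 3*5 = %llu\n", (unsigned long long)r[0]);
        return 0;
    }

1024 limb products on x86 become 256 on AMD64, which is where gains of that magnitude can come from in crypto-heavy code.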

Scientific applications such as Mathematica and Maple, which I require for my job, have been available for AMD64 almost from the beginning days of the hardware platform, and gain rather a lot from AMD64 not only in terms of CPU performance but also from the larger virtual address space.

Even if all of these things weren't true and Linux didn't exist, the fact is that Windows Vista (vaporware jokes aside) really is coming out in five months and really does spell the end of 32-bit computing for mainstream performance applications. Windows Vista isn't some half-finished 64/32-bit mixture the way Windows 95 was a half-finished 32/16-bit mixture -- Vista is 64-bit through and through.

The fact that your elitist RISC platforms had 64-bit addressing some 30 years ago is not relevant to this discussion. Like it or not, x86 has "won" the platform battles, and x86-64 (unlike Alpha, IA64, SPARC) is the first and only 64-bit computing platform that will be relevant for general-purpose computing.

That's nice. But most users run non-scientific applications that will see very little benefit from 64-bit ops. The extra register space is really not a big deal on a superscalar out-of-order core, where register renaming and high L1 bandwidth keep the instructions moving. Stack spills/fills nearly always hit in the L1, and performance nowadays is becoming dominated by cache misses. 64-bit is certainly necessary to move beyond 4 GB of address space (without segment tricks), but most users will see no benefit from…

I agree that 64-bit has been supported in Linux for almost as long as Linux has been portable. I also agree that 64-bit is the future. What I do question, however, are those numbers. I don't think the simple act of recompiling for AMD64 will get any performance gain. From what other qualified individuals have told me (nothing I write is ever so large that I benchmark 32-bit vs. 64-bit versions to optimize), if you just recompile with a different target, your code will almost always be equal or slower in speed…

This is Slashdot. The GP, I, and probably a lot of other readers [slashdot.org] who are not average users care about performance in 64-bit mode. See, for example: I write 64-bit assembly code optimized for AMD processors. So far I have never had the chance to evaluate a Core 2 CPU. So, like the GP, I would like to see 64-bit benchmarks of Core 2 CPUs. Is it so hard to understand?

Sure, average users don't care about 64-bit performance. But average users don't care about performance *at all*. If you just want to use online poker, MSN, and websites, the speed of your CPU is nowadays completely irrelevant. Anything you buy will be ridiculously overspecified for those tasks. Gaming is about the only mainstream Joe Consumer application that cares about hardware speed, and even that is mostly dominated by the video card, not the CPU.

I agree that having multiple pages just to increase ad impressions is a little annoying, but why do people gripe about a site trying to profit from its work/research/journalism/whatever? How else is it going to make money/pay for the bandwidth that slashdot/anyone generates? Get off your soapbox!

1) Some things make sense to split across multiple pages, such as cases where you have 10 images for each of the benchmarks you ran and you put them on multiple pages to make life easier for dial-up users.

The Core 2 Duos are tremendously and easily overclockable. I upped my performance 25% by changing the FSB from 266 to 333. While this sounds like a significant overclock, for the Core 2 Duo it is actually rather conservative: you just switch to DDR2-667 memory. I'm using the stock Intel cooler and my chips are running just fine temperature-wise. People who are more ambitious are going for 400+. When you combine the inherent performance and value of the line with the ease of significant overclocking, AMD isn't even in the same ball game anymore.
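The arithmetic behind that 25%, for anyone wondering (the multiplier here is my assumption - 9x, as on a 2.40 GHz E6600 - since the post doesn't name the chip): core clock = FSB x multiplier.

    #include <stdio.h>

    int main(void)
    {
        double mult  = 9.0;            /* assumed locked multiplier */
        double stock = 266.0 * mult;   /* 2394 MHz, i.e. ~2.4 GHz   */
        double oc    = 333.0 * mult;   /* 2997 MHz, i.e. ~3.0 GHz   */
        printf("stock %.0f MHz -> overclocked %.0f MHz (+%.1f%%)\n",
               stock, oc, 100.0 * (oc - stock) / stock);
        /* DDR2-667 clocks at 333 MHz, which is why it pairs 1:1 with a 333 FSB */
        return 0;
    }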

For about the price of just your processor, I got a 2.4 GHz Athlon 64 processor, 800 MHz DDR2, an AM2 motherboard, and a GeForce 7600GT. This entire system draws 100-102 watts when idle, including 3 hard drives (according to a Kill A Watt). The CPU runs at ~35C @ 1100 RPM and is silent. This isn't a top-notch system of course, but it's pretty decent. I'm glad I had a dual-core Pentium D at work (i.e. one I didn't pay for, so didn't have to rationalize) to see that the vast majority of the time the extra core doesn't get…

It's completely true that at the "lower" end of the spectrum (i.e. pre-Core 2 Duo) you get better bang for your buck with AMD. However, if you are planning on spending ~$200+ on a processor, then you clearly get more value out of the Core 2 Duo. As to the value of dual cores, I'm a huge fan. I assume that for most games it doesn't make much of a difference, but if you are a power user (i.e. doing development work or running other CPU-intensive applications), it's a godsend.

Quite a few people seem to have missed a pretty obvious problem: the choices they've made as to which Intel processor to compare to which AMD processor just don't make sense. Look at the price table:

In every case, the Intel processor is more expensive than the AMD chip to which they compare it. The Intel E6700 is over 60% more expensive than the AMD 5000+ they consider comparable. The Intel E6300 is not only more expensive than the AMD 3800+, but also more expensive than AMD's next step up, the 4200+.

Given their prices, the E6300 should obviously be compared to the 4200+ rather than to the 3800+. Looking at this particular pairing, rather than the nearly clean sweep for Intel, they each win some and lose some. If you simply count wins, Intel wins more often than AMD -- but for that to mean much, you need to look at what they win at, not just how many different benchmarks they win. Just for example, PCMark05 goes 3:1 in favor of the E6300 -- but quite frankly, none of PCMark05 really means a thing.

Unless money is no object to you, the two lines look pretty closely matched. In video encoding and rendering tasks, Intel wins quite easily. In the ScienceMark scores, AMD wins pretty easily. Elsewhere, a lot are really too close to call based on the data provided. There are a number of cases in which each wins by less than 2%. It's impossible to say for sure without knowing things like the standard deviations on these scores, but there's a pretty fair chance they have no statistical significance at all.
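To make that last point concrete, here's the kind of sanity check I'd like reviewers to run, as a C sketch; the scores are made up for illustration, since the article doesn't publish per-run numbers. A gap only counts if it's large next to the run-to-run noise.

    #include <math.h>
    #include <stdio.h>

    /* mean and sample standard deviation of n scores */
    static void stats(const double *x, int n, double *mean, double *sd)
    {
        double m = 0, s = 0;
        for (int i = 0; i < n; i++) m += x[i];
        m /= n;
        for (int i = 0; i < n; i++) s += (x[i] - m) * (x[i] - m);
        *mean = m;
        *sd = sqrt(s / (n - 1));
    }

    int main(void)
    {
        /* hypothetical benchmark scores, 5 runs per chip */
        double chip_a[] = { 101.2,  99.8, 100.5, 100.9,  99.6 };
        double chip_b[] = { 102.3, 101.1, 103.0, 101.8, 102.4 };

        double ma, sa, mb, sb;
        stats(chip_a, 5, &ma, &sa);
        stats(chip_b, 5, &mb, &sb);

        double gap   = mb - ma;
        double noise = sqrt(sa * sa / 5 + sb * sb / 5); /* std. error of the gap */
        printf("gap %.2f vs noise %.2f -> %s\n", gap, noise,
               gap > 2 * noise ? "probably a real difference" : "too close to call");
        return 0;
    }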

I was referring to ScienceMark results not being included in the "Scaling" graphs at the bottoms of the linked pages. I like those graphs, except I wish they would use cost (in $) as the X axis instead of simply lining up pairs of processors which, as observed above, are not equal in price. But my point is, even if you shift the Intel line to the left one or usually even two notches, it still dominates the AMD line.

Intel can afford to go lower; they have more money and can sustain smaller profits, or even losses, for longer, and the other benefits of being the larger company come into play.
Also, in part, it's because the CPU has to go into a motherboard. If the chip is $60 less but the motherboard is $120 more, the AMD solution is cheaper performance-wise. Once Intel gets motherboard supply up, this factor fades fairly quickly, however.

They will be forced to drop prices, because Intel is ramping up Conroe production; it is only a matter of time until supply can meet demand. Fortunately for AMD, they have managed to start their own 65nm manufacturing now: http://amd.vendors.slashdot.org/article.pl?sid=06/09/12/0159259 [slashdot.org]. That should allow them to reduce their own manufacturing costs and afford the price cut.

Fortunately for AMD, they have managed to start their own 65nm manufacturing now.

There were articles about two weeks ago about how they're having issues with 65nm: they can't get the voltages as low as they want, which is driving power requirements up. Dunno if they've got it sorted out yet.

They do have some technical advantages over Intel once they get 65nm working (such as SS and SOI, which I don't think Intel uses).

AMD already dropped prices on the Athlon 64 back in July (and that was scheduled a few months in advance, maybe as early as February 2006). Their next scheduled price cuts are in October, but are only on the Sempron and Turion chip lines.

Any price drops outside of those two months, or on chips other than Sempron/Turion, would be news.

Comparing performance, I can only see this (hopefully with a minimum of errors):
- SysMark: a $230 E6400 performs nearly as well as the $825 FX-52.
- PCMark05: the $230 E6400 is similar to the $346 5000+.
- ScienceMark: the $230 E6400 is similar to the $187 4200+.
- 3DS Max 7: the $230 E6400 falls between the $346 and $825 Athlons.
- Cinebench: the $230 E6400 is a little better than the $253 4600+.
- 3DS Max 7 (rendering): the $230 E6400 falls between the $253 and $346 Athlons.
- LightWave: no Athlon comes close to touching even a $190 Core 2 Duo.
- POVRay: the $230 E6400 performs like the $825 FX-…

Huh? At 2.4GHz and below, the price difference is about equivalent to a nice dinner for two with a good bottle of wine. Stay in for a night and you've saved enough to get the better option. It certainly shouldn't put anyone off.

I can buy matching up clock speeds as a possibility, but even assuming it's true, why does it make sense? If I'm buying a CPU, I'm concerned with what I'm going to get for my money. Anybody who still hasn't noticed that clock speed means nearly nothing needs to find a different line of work.

As the owner of an AMD Socket 939 processor, I'm going to skip any AM2-socket mainboard. It's time processor manufacturers realized the era of changing sockets every year is over. Either they foresee the socket interface for the next 5 years, or they have to provide processors for any socket within that timeline. The next mainboard I buy is the one that comes closest to this goal, and mainboard manufacturers are well advised to demand this from their processor suppliers. I'll stick to this policy…

Since AMD CPUs have the memory controller on board, when a change is made to use a new memory technology, the CPU (and socket) must change. In the case of AM2, it appears that DDR2 and the proposed DDR3 will be compatible enough that these two CPU/socket designs will allow some backward compatibility. However, you can't use DDR memory with AM2 CPUs, nor can Socket 939 processors use DDR2. But this situation is the same for other memory technologies: PC133 memory can't be used on a DDR motherboard, nor vice versa…

939 has been out for well over a year now, and it's on the way out. The best time to buy a new socket is when it's brand spanking new, because you can potentially get the most use out of your motherboard that way. Having bought an early Socket 754 Athlon 64, I skipped 939 altogether and now have an AM2 setup.

Then again, I sympathize with you - especially since socket AM2 has 940 pins!

However, another interesting thing is that Intel is very open-source friendly. Intel's new top-of-the-line graphics adapters (found on some Core 2 Duo motherboards) have _FULLY_ open source Linux drivers! That is a _BIG_ thing! You can find more information HERE [intellinuxgraphics.org]. Imagine! Now you can have a fully open source OS without any binary drivers messing up your system. These onboard graphics adapters are also very fast and capable, so it's a big thing to many of us.

We all know that the performance crown (and certainly the performance-per-watt crown) went back to Intel with the Core 2 Duo. But in the medium and low price ranges, performance per dollar is contested. At the beginning, the article says that you can't compare the product lines directly any more (so comparing simply by price would be best), and then they make up their own comparison table where they put each AMD processor next to a significantly more expensive Intel model. This is BS of the highest degree. 64-bit is a completely…

Not really responding to anyone in particular, but this seemed a reasonable place to bring this up.
You need a motherboard for these CPUs and really should consider its cost as well; much else can be retained from the system being upgraded, but not the motherboard and, in many cases, the RAM.
Last I heard, Intel wasn't shipping motherboards fast enough to meet demand, such that what you save in CPU cost you lose in motherboard cost.
This will probably change for the better, but with the rate at…

Processors have become a commodity. You buy as much processor performance as you need or can afford. The Intel and AMD processors are all great right now...well all except the old Intel P4 and Celeron stuff but that will be mostly gone in a few months anyway. Move along...there's no story here.

ok, I'll bite....

This is slashdot. We look at specs and drool. We crave machines with 64 gigs of ram, and a solid state hard drive in the petabyte range. If there is some way to make things blinky or shiny, someone is wondering how much longer their kids can put off braces. If someone comes out with a way to make IE 7 beta 4 load pages 3% faster, someone is going to be running tests all night long. It's news for nerds, stuff that matters. Go troll on digg or break.com and you'll have a point, but not here.

All in all I'm glad that Intel has decided to retake the lead in the price/performance war, AMD needs a new kick in the pants.

$DEITY bless the blinky bits; there's a lot of joy to be had in watching the flickering lights of a multi-disk array that you built yourself. Watching in real time as different disks get used to service requests can be a bit mesmerizing. (Plus it points out a possible performance issue with mdadm's RAID10 implementation when disk 0 has a higher utilization than the other disks.) Of course, I used to be entranced by the lights on the front of my 14.4 kbps modem.

If someone comes out with a way to make IE 7 beta 4 load pages 3% faster, someone is going to be running tests all night long. It's news for nerds, stuff that matters.

I'd agree with you if this were actually new news, but it's not, and I don't. It was news a few months ago when Intel came out with the Conroe and enlarged the hardware performance envelope a little bit, but it's not news now, so it doesn't matter. Software is waaaaay behind the silicon. It's the software news that matters right now, and…

I'm not going hunting for the links, but all the power consumption comparisons I've seen show Intel in the lead over AMD now in terms of power consumption vs. performance. I'm not sure about dollar-for-dollar any more; AMD stuff is going cheap now because they've lost the lead, especially if you don't mind relatively poor performance.

dB for dB? Since when did processors make noise? If you're talking about their respective heat output for equivalent performance, again it seems Intel is now ahead. The Core 2 runs…

Many of the reviews I've seen show that the AMD systems consume significantly less power at idle than the equivalent C2D system. While the C2D is pretty much undoubtedly the faster of the two arches, I'm still pretty staggered by the energy efficiency of the AMD64. As another poster pointed out, AMD's cherry-picked ADD chips (well, the 3800 X2 ADD anyway) consume utterly tiny amounts of power, even on an appallingly stone-age 90nm litho process ;) I can't wait for AMD's 65nm to start shipping once their…

I agree - I'm very concerned about idle power. If the idle power of an entire system can be brought low enough, say 5 W, I could leave it on all the time and not bother trying to get nvram-wakeup schemes to work so an HTPC can start up and record a show. I'm typing this on a Windows Centrino laptop now, and it annoys the hell out of me how hot the thing gets when I'm doing nothing but scrolling through a document. This brings up the general issue: why aren't reviewers covering all power efficiency…

Not playing games is the reason. You're only using applications, and most applications will be speedy on a 3.4 GHz machine. The applications that take 45+ minutes could be sped up, but will you notice it if you're not at the computer while it's processing?

I'm a PC gamer in need of an upgrade. I'm seriously considering going back to consoles since online gaming with them has really taken off. In that case, I probably would not upgrade my computer anytime soon.

If you're a true PC gamer you'll get frustrated and bored with console games pretty quickly. The load times are intolerable, buying memory cards is insulting, running around levels looking for save points is infuriating, the textures are low res, the controls are way oversimplified, the cameras are crap, the controller is way less precise than a mouse... I could go on and on.

Buying a PS2 was actually what clued me in that it was time to upgrade my computer. Some PS2 games are incredible and totally…

But today I looked at hardware prices and found out that my 2 year old 3.4 GHz Intel motherboard with AGP bus is hopelessly outdated, and that you can get a dual core Intel CPU cheap.

I just wish I could wave the same magic over my 0.001-Gig Internet connection. A super-fast system that can render full-motion video is fine, as long as the video isn't "Buffering 02% Complete... Buffering 03% Complete..."