
144 Comments

I think a better option for testing compile speed would be to pass a -j argument to make when compiling Firefox, telling it to run as many parallel jobs as the processor has threads, i.e. -j2 for a dual-core or HT CPU.

So they're reproducible, but only in secret. And you knew, as usual, about mistakes you were making, but made them anyway to, um, make a valid comparison to something else that no one can verify. Nicely done. Whatever they're paying you, it's not enough.

You are correct that you cannot reproduce them, but we can and have, tens of times over the last year with different hardware. I do not believe that your inability to reproduce them discounts their validity, but it does require you to have a small amount of trust in us.

We have detailed the interaction of the application with the database. With this description you should be able to draw conclusions as to whether it matches the profile of your applications and database servers. Keep in mind, when it comes to performance tuning the most common phrase is "it depends". This means that there are so many variables in a test that unless all are carefully controlled, the results will vary greatly. So, even if you could reproduce it, I would not recommend a change to your application hardware until it was validated with your own application as the benchmark.

The owner of the benchmark is not AMD, or Intel, or anyone remotely related to PC hardware.

I think if you can get beyond the trust factor there is a lot to gain from the benchmarks and our tests.

I can't see why anyone would choose the Intel dually over AMD unless all the AMDs are sold out.

Intel needs to get off their arse and design a true dual core chip instead of just slapping two "unconnected" processors on one chip. The fact that the processors have to communicate with each other by going outside the chip is what killed Intel in all the benchmarks.

From your article:
" We cannot reveal the identity of the Corporation that provided us with the application because of non-disclosure agreements in place. As a result, we will not go into specifics of the application, but rather provide an overview of its database interaction so that you can grasp the profile of this application, and understand the results of the tests better (and how they relate to your database environment)."

Then don't include them. A benchmarking tool to which no one else has access is not scientific, because the results can't be reproduced by anyone with a similar setup to verify them.

I don't even know what they do. How are they important to me? How will this translate to anything real-world I need to do? How can I trust the mysterious company? It could be AMD for all I know.

How can it be the best bang for the buck? Unless you are seeing benchmarks from Anand that say so, how could you come to that conclusion?
In some tests the 3800+ was the worst performer while the X2 and PD were the best.

I think what #130 was saying was that, from top to bottom, AMD's offerings are really good. If you want the best "bang for the buck", the 3400+ or whatever, or a 3000+ Winnie OC'd, will provide you with the best performance per dollar you spend, EVEN against the X2s.

On the other hand if cost is not an issue, an X2 4400+ provides extremely good performance for people willing to pay the $500 premium.

Zebo's point is in direct response to your point, which is AMD "STILL" has the best bang for the buck, not intel.

Compare:
A. a single-threaded 32-bit app on a single core
B. a multi-threaded 64-bit app on a dual core
Considering that multithreaded apps already see such large gains on dual cores, going 64-bit too could well mean a more than 100% improvement from A to B.

But of course, NO ONE needs dual core, 64-bit and +4GB memory in the next 5-10 years :P

The ball now lies with MS and (Linux) app developers to write more stuff in multithreaded 64-bit code. From what I hear and read, it is not so much the 64-bit part as the threading that is a real challenge, even for veterans.

Anyway, I'm really excited about this development in computing. Not having good multitasking ability feels so outdated; I've been crying about that for years, and finally it's here...
Well, almost, and it's probably another year before I can afford it, but still... :)

Again, the lack of technical depth of AT's "experts" is obvious. On SQL Server, you're not supposed to prepend stored-procedure names with "sp_", as it introduces a performance penalty. This is basic knowledge. Some have remarked before on how their .NET "experts" code like, um, transplanted ColdFusion "experts". :)

A minor error: on page 12, right above the graph, it says "The Dual Opteron 252's lead by 19% over the closest Xeon, which was the Quad Xeon 3.6 GHz 667MHz FSB", but the slowest Xeon is the 3.3 GHz one.

I appreciate the article but am disappointed by the misleading title: "AMD's dual core Opteron & Athlon 64 X2 - Server/Desktop Performance Preview". Socket 939 is not equal to 940. Also, the article clearly says
COMPARE ATHLON 64 X2 right on the 1st page. In fact the article does not admit to not having a real X2 until page 13. I love reading AnandTech's articles and visit frequently... Perhaps a better title would have been "Preview of the Athlon 64 X2 using an Opteron CPU".

Frankly, Jep, I'm not buying it. It would cost AMD significantly more to make these dual 1MB L2 cores different at the core level. The 8XX, 2XX, 1XX, and X2 are identical except for traces in the packaging and pins that make them function differently. Check out Tom's Hardware's recent CPU article about AMD manufacturing and you'll see what I'm talking about.

"It's odd that some picture game developers immediately supporting the PhysX chip as soon as it's available, but think they'll drag their feet to take advantage of another whole CPU core at their disposal."

It's basically about the implementation differences of the two. You can be relatively certain that PhysX is going to ship their chips/cards with libraries that allow game devs to just speed up certain processing with special function calls (e.g. calculate_particle_spread()). Multi-threading requires that you design your application from the very start to take advantage of it (mostly - I would wager splitting off the background music into its own thread is reasonably straightforward).

Game logic doesn't always lend itself to multi-threading, either. If I shoot my gun, I want to hear the sound next. I don't want it to be thrown at the sound thread, where it may or may not execute next. Threading introduces latency, in other words, unless you so tightly bind your threads together that you may as well not use multi-threading.
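The latency point can be sketched concretely: handing the gunshot off to a sound thread through a queue means it plays whenever that thread next gets scheduled, not immediately. A minimal Python illustration (the queue-draining "sound engine" is entirely hypothetical, just a stand-in for real audio mixing):

```python
import queue
import threading

# Hypothetical sound "engine": a worker thread drains a queue of events.
sound_queue = queue.Queue()
played = []

def sound_worker():
    while True:
        event = sound_queue.get()
        if event is None:      # sentinel: shut the thread down
            break
        played.append(event)   # stand-in for actually mixing the sound

worker = threading.Thread(target=sound_worker)
worker.start()

# Game loop: firing the gun merely enqueues an event. The sound plays
# whenever the worker is next scheduled -- not necessarily "next".
sound_queue.put("gunshot")
sound_queue.put(None)
worker.join()
print(played)  # ['gunshot'] -- eventually, after a scheduling delay
```

The queue decouples the game loop from the sound code, which is exactly the latency the comment describes: the enqueue returns instantly, but playback waits on the scheduler.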

#83 Get a clue: a single core 3500+ is faster than the equivalent Opteron at the same speed. Why? Unregistered memory and tighter memory timings. ECC memory comes with a 2-4% performance penalty, but the big difference comes with the command rate: 2T for the Opteron and 1T for the 3500+. The AMD64 thrives on lower latencies, which can make as big as a 10% performance difference, and that is BEFORE we even think about raising the FSB speed, which makes a significant difference to overall system performance. 15% is in no way unrealistic with a mild overclock and lower latencies; if you don't believe me then email Anand and ask him.

#40 (Doormat):
You're forgetting that the size of a dual-core die is (roughly) double that of a single-core. So, assuming 1000 cores/wafer and a 70% yield per core, a single-core wafer (with an ASP of $500) will net AMD 700*$500 = $350K.

The same wafer with dual-cores will produce (approximately) 1000/2 * (0.7)^2 = 245 CPUs. So, to get the same amount of cash per wafer, AMD needs an ASP of $1429, or the second core costing 85% more than the first core.

Of course, it's not quite this simple ("bad" chips running OK at lower speeds, etc.) but it's not entirely unreasonable to see dual-cores with prices ~3 times that of a single core at the same speed grade. Intel is almost dumping (in the economic sense of the word) dual-core chips.
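For anyone who wants to check the arithmetic, here is the comment's model as a short Python sketch (1000 die sites per wafer, a 70% per-core yield, and a $500 ASP are the comment's assumptions, not real AMD numbers):

```python
sites = 1000            # single-core die sites per wafer (assumed)
yield_per_core = 0.70   # probability one core is good (assumed)
asp_single = 500        # dollars per single-core chip (assumed)

# Single-core wafer: 700 good dies at $500 each.
good_singles = sites * yield_per_core
revenue = good_singles * asp_single            # ~$350,000 per wafer

# A dual-core die is ~2x the area, so half the sites, and BOTH cores
# must be good: yield is 0.7^2 = 49%, giving ~245 sellable chips.
good_duals = (sites // 2) * yield_per_core ** 2

# ASP the dual-core part needs for the same revenue per wafer.
breakeven_asp = revenue / good_duals           # ~$1429

print(round(revenue), round(good_duals), round(breakeven_asp))
```

At $1429 the second core effectively sells for about $929, i.e. roughly 85% more than the first core's $500, matching the comment's figure.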

Anyway, yes, they both use C syntax; however, that's pretty much irrelevant given that Java also uses C syntax (as does Managed C++, which incidentally IS the .NET language directly based on C++), and I've never heard anyone call Java related to C++. Beyond (some) syntax heritage and the fact that they're both OO languages, they're very different beasts.

"'C# is directly related to C and C++. This is not just an idea, this is real. As you recall C is a root for C++ and C++ is a superset of C. C and C++ shares several syntax, library and functionality.' Quoted from above.

L8r."

Err, yeah, C++ is mostly a superset of C. That's neither here nor there. Just try to use the C/C++ preprocessor in C# and you'll see very quickly what the difference is. Or try using C++ multiple inheritance. You'll find that just because you took Java, added operator overloading and made binding static by default, it's not C++.

If you use an Opteron 875, then label it as such in all diagrams. You can make a note that the Athlon 64 X2 4400+ will perform similarly to the Opteron 875. The differences in motherboard and RAM will affect results, so a direct re-labelling should not be made.
Good database, multimedia, and data analysis software should make good use of multi-core/multi-CPU systems. When I mention data analysis I'm talking about software like SAS 9.1.3 and SAP. Even SAS is only threaded for a few tasks, and it is a big hassle to pipeline one step into another.

Good article overall, although I question the validity of declaring that an Opteron 875 is roughly equivalent to an Athlon 64 4400+. I could be wrong, but surely there must be significant architectural differences between the server-class chip (top of the line server-class chip, no less) and the desktop Athlon 64? If not, then why the price premium for Opterons? And why don't manufacturers just find a way to kludge the Athlon 64 to work in MP configurations? If the two really are equivalent at the same clock speed, it would be much more cost effective to use kludged Athlon 64s, and it would also let higher performance levels be reached, as the dual-core Athlon 64s are slated to run at one clock increment higher than the fastest dual-core Opterons. So anyway, is it *really* valid to treat an Opteron as essentially equivalent to a similarly clocked Athlon 64? As much as I love finally seeing Intel chips trounced pretty much across the board, it seems to me that the results could be inaccurate given that an Opteron 875 was used and simply "labeled" as an Athlon 64 4400+.

#89... seeing as how the Opteron x75 and A64 X2 are based on functionally identical cores, that's not too likely at all. What DOES seem likely to me, reading this article, is that BIOS updates and X2 support on 939 boards are going to be a very interesting story to follow. It doesn't look like it's easy to get a solid AMD dual core BIOS if even Tyan, of all board manufacturers, is struggling. That may give a feisty smaller board manufacturer a chance to slam the big boys and grab market share (as ECS did with the K7S5A).

"C# is directly related to C and C++. This is not just an idea, this is real. As you recall C is a root for C++ and C++ is a superset of C. C and C++ shares several syntax, library and functionality." Quoted from above.

#86 - the r_smp cvar was disabled in Quake 3 in a patch, for a reason I don't know. I confirmed this by having Quake 3 crash on my P4 HT CPU with that setting enabled. As for Doom 3, I'm not sure; I'm guessing it's not implemented well enough yet...

#83:
"Real gamers" may use a single core, but I have been hankering for duallies since I compared an older dual G4 to my newer single G4. Even on the crappy MaxBus, I could browse the web, chat, do "real work" and game, without having everything go to pot when a bolus of e-mail came in.
When you buy a dualie of any type, you buy the ability to do other stuff while your computer works on its latest task. Remember that the next time you get lagged while Outlook downloads your latest spam.

Why weren't there any SMP tests done on the Quake 3 engine? After all, it is said to be multithreaded.

Also, Carmack said during the development of Doom 3 that the engine was going to support multiple processors; did this ever happen? Does anyone know what the command might be in the D3 console to enable SMP, like its cousin? How much truth is there to this?

Add to all the arguments that we can potentially see programs taking advantage of this quite soon...without the effort required to implement full multi-threading, game functions could be assigned to use the other processor if it's available. For example, AI can be done by one core, while the other core does the rest of running the game.

" The three main languages used with .NET are: C# (similar to C++), VB.NET (somewhat similar to VB), and J# (fairly close to JAVA). Whatever language in which you write your code, it is compiled into an intermediate language - CIL (Common Intermediate Language). It is then managed and executed by the CLR (Common Language Runtime).
"

Waaah?

C# is not similar to C++; it's not even like it. It's derived from MS's experience with Java, and it's intended to replace J#, Java and J++. The language which actually is similar to C++ is Managed C++, which is generally listed as the other main .NET language.

Tell me how much faster a single core 4000+ is compared to a 3800+ and you'll see it's less than 5% on average, mostly about 2%. Your X2 at 2.2GHz with 1MB cache will perform the same as, or at most 1-1.5% faster than, a single core 2.2GHz 1MB cache chip in games. So 10% lower performance is kinda bad for a CPU with a higher rating. Memory won't help you that much, except in a fantasy world.

And let's take a game as an example.
Halo:
127.7 FPS with a 2.4GHz 512KB cache single core.
119.4 FPS with a 2.2GHz 1MB cache dual core.
And we all know they basically have the same PR rating due to the cache difference.

And the cheap single core here beats the more expensive, more power-hungry and hotter dual core CPU by 7%.

So instead of paying $300 for a CPU, you now pay $500+, and get worse gaming speeds, more heat, more power usage... for what, waiting 2 years on game developers? Or some 64-bit rescue magic that makes SMP for anything possible? It's even worse for Intel with their crappy Prescotts, 3.2GHz vs 3.8GHz. At least AMD is close to the top CPU, but still a bit away.

Forgot to add, about physics engines etc.: for that alone you can add a dedicated chip on, say, a GFX card for $5-10 more that will do it 10x faster than any dual core CPU we will see for the rest of the year. Kinda like GFX cards are tens of times faster than our CPUs for that purpose. Not like the good ol' days where we used the CPU to render 3D in games.

CPUs are getting more and more irrelevant in games. Just look at how a CPU that performs worse at everything else, the Pentium M, can own everything in gaming, though it lacks all the goodies the P4 and AMD64 have.

It makes one wonder what they actually develop CPUs for, since 95% is gaming and 5% workstation/servers, and corporate PCs could work perfectly well with a K5-K6/P2-P3.

Then we could also stay with some 100W or 200W PSU rather than a 400W, 500W or 700W.

#79 Are you smoking something? A dual core 4400+ running with slow server memory and timings, plus no NCQ drive, performed within 91% of the fastest gaming chip around. The real X2 4400+ will get at least a 15% performance boost from faster memory timings and unregistered memory, and that is before we even think about overclocking at all.

Dual cores are completely useless for the next 2+ years unless you use your PC as a workstation for CAD, Photoshop or heavy encoding of movies.

And Windows XP 64-bit will be a toy/useless for the next 1-2 years as well, unless you use it for servers.

Hype, hype, hype...

In 2 years these current Intel and AMD cores will be outdated and we'll have the Pentium V/VI or M2/3 and K8/K9. Then we can benefit from it. But look back in the mirror: those early AMD64s and those low-speed 64-bit Pentium 4s won't really be used for 64-bit, because by the time we finally get a stable driver set and Windows-on-Windows environment, we will all be using Longhorn and next-gen CPUs.

Dual cores will be slower than single cores in games for a LONG, LONG time. And they will be MORE expensive. And utterly useless except for bragging rights. Ask all those people using dual Xeons or dual Opterons today how much gaming benefit they get. Oh yeah, but hey, let's all make some lunatic assumption that I'm downloading with P2P at 100Mbit so one CPU will be 100% loaded while I'm encoding a movie; yes, then you can play your game on CPU #2. But how often is that likely to happen anyway? And all that multitasking will just cripple your HD anyway and kill the sweet heaven there.

It's a wake-up call before you become the fools.
Games for 64-bit and dual cores haven't even been started yet. So they will have their 1-3 years of development time before we see them on the market. And if it's 1 year, it's usually a crap game ;)

"Armed with the DivX 5.2.1 and the AutoGK front end for Gordian Knot..."

AutoGK and Gordian Knot are front ends for several common apps, but AutoGK doesn't use Gordian Knot at all. AutoGK and Gordian Knot are completely independent programs. len0x, the developer of AutoGK, is also a contributor to Gordian Knot development. That's the connection.

Hmm, it does seem that dual core with Hyper-Threading can be a real help and yet sometimes a real hindrance. Some benchmarks show it giving stellar performance and some show it slowing the CPU right down by swamping it. Some very hit-and-miss results for Intel's top dual core part there; it makes me wonder if it is really worth the extra money for something that can be so unreliable in certain situations.

Anand, Jason and Ross... hell of a job, guys, you have outdone yourselves. As for the X2 4400+ preview results, holy shit is all I can say; better than I expected, and those scores are WITHOUT the aid of an NCQ-enabled drive. The cost is high, very high in fact, but the X2 just scales so much better than the equivalent Intel. All I want to see now is an X2 4400+ with the FSB overclocked to DDR500 speeds; I am really interested to see how much that extra 1GB/s+ of bandwidth helps a dual core setup. Perhaps that is something you can look into for us please, Anand and Co? T.I.A. ;)

#65 - I recommend you reread the "A Look at AMD's Dual Core Architecture" page. The fact that AMD's Athlon 64 and X2 memory controllers are on-die gives them a leg up on Intel's Pentium Ds. On the X2, communication between the two cores doesn't have to traverse the external FSB.

From the article: "Although the use of ECC memory and a workstation motherboard would inevitably mean that performance will be slower than what will be when the real Athlon 64 X2s launch, its close enough to get a good idea of the competitiveness of the Athlon 64 X2."

Anand didn't "cripple" or "misrepresent" anything. He got as close as he could with the materials available to him, and made it clear that some liberties/extrapolation would be required.

However, it does look promising that the X2 will perform even better than projected today. Just as Anand said up front.

Anand crippled/misrepresented it by running a 175 in his tests, which has ECC memory, 2T, and my guess is 3-3-3 timings (most ECC RAM is 3-3-3; since he doesn't say, I must go with the odds).

Talk about hamstringing an A64. Anand's own tests show just how crippling 2T is for the A64: upwards of 10% less performance alone. I've shown 3-3-3 vs 2-2-2 to be significant, about 5%, in my memory matrix thread, since A64s love low latency. ECC knocks out about 3-5% more performance due to the extra wait state. Would the "real" X2 debuting at 18% faster be unfair? I don't think so, when paired with desktop memory.

It's going to get REAL ugly on the desktop for Team Blue no matter how you slice the numbers. When a real live X2 comes with unbuffered memory, low latencies and 1T, it will only get worse, since Intel already loses to an unadventurous server chip right now.

#64 - I'd like to know the same. I definitely won't buy a processor for more than $250, no matter what the performance is. I'm sure they'll drop eventually, but I wonder if that'll happen before 939 is completely obsolete and I have to buy an M2 mobo anyway...

Also, something I've been wondering: if dual core has such an impressive effect on desktop performance, and future programs will be multithreaded to take advantage of it, how come nobody ever talks about making multi-socket desktop boards? A dual-939 setup with a couple of $120 OC'd Winnies would be just as fast as the X2 and a heck of a lot cheaper. Or you could slap a couple of X2s in there when they actually come out and have sick performance.

I really enjoyed taking a look at what you could bring us about these upcoming Athlon dual core processors. It looks like dual core will be the future for all of us, at least at some point.

Just a quick comment on the price comparisons that you provided between the dual core Opterons and their single core predecessors: I found it interesting to compare prices on the basis of the number of cores.

So,

Opteron 248: 2x$455=$910
Opteron 174: $999

Opteron 848: 4x$873=$3492
Opteron 275: 2x$1299=$2598

Assuming the performance scales simply based on the number of cores involved, the pricing of the new dual core opterons looks more attractive.
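The per-core arithmetic above, spelled out as a short sketch (prices are the ones quoted in the comment):

```python
# Prices as quoted in the comment (USD).
price = {"248": 455, "174": 999, "848": 873, "275": 1299}

# Two single-core Opteron 248s vs one dual-core 174 (two cores either way).
two_248 = 2 * price["248"]   # $910
one_174 = price["174"]       # $999

# Four single-core 848s vs two dual-core 275s (four cores either way).
four_848 = 4 * price["848"]  # $3492
two_275 = 2 * price["275"]   # $2598

print(two_248, one_174, four_848, two_275)
```

So on a per-core basis the dual-core premium is about 10% in the 2-core case, and in the 4-core case the dual-core route is actually some $900 cheaper.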

It's odd that some picture game developers immediately supporting the PhysX chip as soon as it's available, but think they'll drag their feet to take advantage of another whole CPU core at their disposal.

Maybe that will be the reality though, as MT programming is supposed to be a lot harder. Still, to be able to get a game out the door that blows away any of the competition, it might happen sooner than we think. And I could see how Intel would want to push this along to help their sales, and might contribute resources towards making it happen. "OMG, that new game is great, but it totally rules on a new dual core rig! Saw it at my friend's house the other day!"

Who knows, maybe games'll gobble up that second core so fast, it won't be long before we complain about how sluggish the system is when multi-tasking, and that we're shutting down background processes, anti-virus, etc all over again. "We need quad core!" :P

Ah, well... most of you are forgetting something: sure, the chip's cost is almost 50% higher than the cheapest Intel offer; however, to use a Pentium D, you require a new motherboard (i955X at probably $180, nF4 IE at $200), AND DDR2 memory... plus, if you have an AGP card, a PCIe video card as well. That's about $650 for the whole Intel upgrade. AMD, on the other hand, requires just the processor, which ends up being FAR cheaper.

It's too bad that gamers or people that don't multitask are basically left in the dark (extra-performance-wise) by dual-core. I'm not going to break the bank for something that's going to give me less performance than I already have.

There's multitasking and then there's multitasking. One kind is having a main program up that gets most of the CPU's attention and another BitTorrent or whatever that's taking up <5% CPU. Usually I'm not trying to encode a video while I play a game, which would be the other type of multitasking. Only the second kind would greatly benefit from these new CPUs, which is a shame. In the first multitasking type I talked about, dual core will improve responsiveness but not raw processing performance.

Does this mark the end of single-threaded performance? Programmers will have a hell of a time creating applications that benefit from dual core, unless the program would benefit by nature (i.e. a game server browser, or an AI-heavy game). If the PhysX chip comes through, dual core won't help too much with physics either. The only benefit that would ever see the light of day for me is that the rest of my system isn't lagged while something else is taking up 100% CPU. For example, I could still move my mouse and use Windows Explorer while I'm compressing some files with WinRAR. Even then, these scenarios don't come up too often for me personally.

When it comes to raw number crunching performance, the dual-core CPUs don't show any improvement over single-core ones. Sadly enough, I think it's going to take forever for programmers to multi-thread their applications. That being said, any program I make from now on will be multithreaded as much as possible.
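For what it's worth, "multithreaded as much as possible" doesn't have to be painful for embarrassingly parallel work. A minimal Python sketch using a thread pool (the checksumming workload is a made-up stand-in for any chunked CPU job; hashlib happens to release the GIL on large buffers, so both cores can actually be used):

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def checksum(chunk: bytes) -> str:
    # CPU-heavy stand-in for "real" work on one chunk of data.
    return hashlib.sha256(chunk).hexdigest()

# Four 1 MB chunks of dummy data to process independently.
chunks = [bytes([i]) * 1_000_000 for i in range(4)]

# Two workers: one per core on a dual-core CPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    digests = list(pool.map(checksum, chunks))

print(len(digests))
```

The pattern is the point: split the data, map a pure function over the pieces, collect the results. No locks, no shared mutable state, so the latency pitfalls discussed elsewhere in this thread don't apply.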

...but I do have one complaint. I would like to have seen a top of the line Intel single core CPU compared to the "X2 4400+" (there were no single core P4s in your tests), and a Socket 939 Athlon 64 3500+, which runs at the same 2.2GHz as the "X2 4400+", to see the direct effect of the second core, instead of the 2.4GHz 3800+.

Some multitasking tests were a bit weird, dare I say unrealistic, but OK for starters. The way I multitask is usually a bunch of IE windows (12 at the moment), one Folding@home client, 3-6 BitTornado clients, 1 or 2 (sometimes more) Word documents, 1 or 2 (sometimes more) Excel documents, Outlook Express, possibly Photoshop CS, a bunch of Windows Explorer windows, a few Notepads, some WinZips/WinRARs every now and then, Windows Media Player playing MP3s, Kaspersky antivirus, a dictionary, ACDSee from time to time, Opera with a few open tabs if IE isn't right for the job... and I rarely play any games anymore. This is not at all uncommon for me, so I'm really looking forward to dual cores. I'm just very sorry that AMD can't offer anything at a competitive price, so instead of going to Socket 939 from Socket 754, I might go for an Intel platform. I don't know yet; a lot depends on how hard those PDs are to cool. No word on that yet from you. I wonder why?

page 3 "For example, the Opteron 252 and Opteron 852 both run at 2.6GHz, but the 252 is for use in up to 2-way configurations, while the 852 is certified for use in 4- and 8-way configurations. The two chips are identical; it's just that one has been run through additional validation and costs a lot more. "
I thought that they had different number of HyperTransport (HT) links:
152 - 1 HT
252 - 2 HT
852 - 3 HT
I thought that was the reason why it was impossible to use two 152s in a two-way motherboard.
Maybe I'm wrong.

I am just astounded at the performance of these first dual core processors being presented to us... WOW. Couple that with a well written 64-bit OS and it will be even more smoking!! I think AMD did an extremely good job, and I am glad that they are being aggressive in keeping their pockets full with the prices of their chips. I personally don't think that would stop me from buying their processors. I would wait for the FX to become dual core though. A 3GHz dual core FX would rock so bad!!!

Also, one more thing... so the Tyan mobo holds 2 processors, correct? So if we stuck a dual core in one socket and another in the other socket... that makes it a 4-processor machine then, right?!!

I don't like how you use the Opteron to give a rough estimate of the A64 X2, as there are other architectural changes between the Opteron and A64.

That aside, maybe AMD could bring out X2s using 256KB of cache per core to hit slightly lower price points and at least compete with the 830 (3GHz).
I doubt it'll be too bandwidth-limited, given AMD is selling Semprons with only 128KB of L2 cache.

The high price of the dual core opterons kinda puts me off. I was hoping for 2x the price of the single core, instead of 3.5x (I'm looking at 246 vs 270s). It looks like I'll be going single core (or just holding off) instead of dual core (at least until the end of the year and AMD gets price competition from Intel on the server DC front).

The 3.5x doesn't even make sense from the yield standpoint. If AMD's yields are 70% (a wild talking-out-of-my-ass guess, no real factual grounding in picking that number), then their dual core yields will only be 49% (70% for the first core times 70% for the second core). So out of a batch of 1000 chips, instead of 700 you only get 490. That's 210 chips you need to make up for. If Opterons have an average selling price of $500, then the "adjusted" selling price would be around $715, an increase of 43%, not 250%. Granted, if AMD's yields are higher, the numbers look better (from our perspective - lower prices), but if their yields are lower, it looks really bad (if their yield was only 50%, they'd only get 25% yield on dual core, and would have to double the price).

I guess AMD is just trying to squeeze every dime they can out of this... hopefully that extra money goes to pay for Fab36 and more capacity.

Wow....Very impressive offering from AMD. I think the quote that sums it up best for me is: "you no longer have to make a performance decision between great overall performance or great media encoding performance, AMD delivers both with the Athlon 64 X2."

I was very impressed with Intel dual core chips, but now I know that my next system will go back to be AMD-based. Overall the dual core Athlon64 should be killer.

As for cost, yes it is expensive, but the performance is really phenomenal. I am sure that it too will come down.

Quote: "Despite AMD's lead in getting dual core server/workstation CPUs out to market, Intel has very little reason to worry from a market penetration standpoint. We've seen that even with a multi-year performance advantage, it is very tough for AMD to steal any significant business away from Intel, and we expect that the same will continue to be the case with the dual core Opteron. It's unfortunate for AMD that all of their hard work will amount to very little compared to what Intel is able to ship, but that has always been reality when it comes to the AMD/Intel competition."
This statement should be qualified. The rendering market is much more adventurous than the standard server market (didn't they use a Windows XP 64-bit beta running on Opterons to render SWIII?) and will continue to rapidly adopt Opterons. There are tangible benefits (faster rendering, lower energy costs = $$$) in moving to Opteron for rendering farms. Also, more OEMs like Supermicro and Broadcom have embraced AMD, which should result in much more rapid market penetration than 2 years ago.

It makes good sense for AMD to keep their (server) dual core chips pricey. AMD has limited manufacturing capacity, and they have the best single core solution. In other words, they might as well keep the dual core prices high, to a) make more money in cases where people are willing to fork over lots of money, and b) keep people who are on a budget interested in their single core offerings, at least until their new fab goes online.

I have some comments about the Firefox compile test. First, thanks a lot for including it. You are using GNU make, which supports parallel compiles, so you should be able to replace the line:

make -f client.mk build_all

with the line:

make -j 2 -f client.mk build_all

to perform a parallel compile using 2 processors. The -j option specifies how many jobs to run in parallel. You can do parallel compiles on a single processor machine as well as on multi-processor or multi-core machines. It is often the case that using -j 2 or -j 3 on a single processor machine will give the best results, because it allows overlapping of CPU computation and I/O.

You don't say whether you did a debug or optimized build. I would recommend doing both and reporting the results of both. When doing parallel optimized compiles, you may want to make sure you are not swapping, although for the server tests it looks like you have plenty of memory - 4 GBytes. I did not see immediately how much memory you were using for the X2 tests. Anyway, I would recommend doing both debug and optimized compiles with -j n where n is 1, 2, 3, and 4, or perhaps just 1, 2, and 4. Since compiles are essential to development work and also embarrassingly parallel, this should provide a really good comparison of the multitasking capabilities of these systems.

Hope you can do this, or at least some of it, and thanks a lot for adding a really good compile test to your test suite.

The server market is where AMD is headed to get large margins on their chips. With Supermicro joining the AMD camp (they must have seen the performance of the dual-core Opteron, blinked, and said, "we're in"), Dell is left alone holding Intel-only product lines. Intel will not have a response on the server front until Q1 2006. That is troubling for Intel, because it gives AMD six months of market buildup and gives Fab36 time to come online and increase volume tremendously. It should be interesting.

Imagine a 4800+ on a 939 DFI board running at 2-2-2-8 1T timings versus the dual-core P4 Extreme. Drooling just thinking about having either processor, but especially the AMD.

This is exactly it. Why should AMD let demand outstrip supply? Just jack up the price until you've got just enough demand to consume your supply.

I mean, yes, I'd love an Athlon64 X2 5000+ with 1MB of cache for ~$250, but that's life. AMD stockholders should be pleased with this decision.

There's also the impending move to socket M2 to consider... the Athlon64 X2 makes sense for people with very low-end A64's, but M2 is going to be the better upgrade path for FX and/or 3800+ users. I would be surprised to see any 939 Athlon64's past 5200+.

While our desires as desktop users are for high volumes of X2s at low prices, we have to balance that with what AMD as a company needs to survive: money. AMD is currently capacity-constrained with regard to dual-core CPUs, with only Fab30. They have entered into agreements with both IBM and Chartered for additional capacity (probably on the lower-end chips), but that won't come online until late this year, just before production starts to ramp at Fab36.

In the meantime, AMD has stated that their order of priority goes Server -> Mobile -> Desktop, with the profitability motive in mind. For most users heavily into the multitasking benefits of dual-core CPUs, spending $5xx for the low-end X2 vs. $1000 for the PEE 840 will be a no-brainer. Seeing how that is a small minority of users, AMD can reasonably supply the demand for them while still maintaining the highest level of availability of dual-core Opterons at much better ASPs. Remember that AMD wants to capture as much market share in the server market as possible while Intel has no response.

As a shareholder, I hope that the demand for dual-core Opteron is deafening based on the incredible price/performance ratio (thus limiting their ability to produce the X2 in high quantity). As a middle-of-the-road desktop user, I'm quite content with my mildly OC'd A64 for the next year or two.

I think it is important to remember that the "Athlon64 X2" was actually an Opteron running ECC RAM at 2T on a less-than-stable motherboard. I think it is best to think of this as a comparison of Intel's dual cores, AMD's single cores, and a hog-tied Athlon64 X2.
Makes you wonder how an actual X2 with fast memory on a fast motherboard will perform.

Regardless, I'm really excited about the upgrade potential, and I hope that AMD sticks with socket 939 for a long while.

Another thing to consider is that the 939 and 940 dual cores work with old mobos, so that saves upgraders ~$100 for the mobo (I have no clue what PD mobos are going to cost), plus the ~$100 cost of 512MB of DDR2; neither saving applies if you're going with the PD. So if you own a 939, a 2.8 PD will really cost you ~$441, a 3.0 ~$516, a 3.2 ~$730.
Throw that out the window if you're starting from scratch, however.
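The arithmetic behind those totals, spelled out (the per-chip prices here are back-computed from the commenter's rough figures, not quoted street prices):

```shell
# Effective cost of switching to a Pentium D from a socket 939 system:
# chip price + new motherboard (~$100) + 512MB DDR2 (~$100).
mobo=100; ddr2=100
for chip in 241 316 530; do          # implied chip prices for the 2.8/3.0/3.2 PD
  echo $((chip + mobo + ddr2))
done
# prints 441, 516, 730 -- the commenter's ~$441/~$516/~$730 totals
```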

Great review very detailed, I have one caveat, and I should preface it with the fact that I own 4 systems, and all of them have Athlon 64's or XP's.

That being said, your "Multitasking Scenario 2: File Compression" seems to be misleading, and it identifies the A64 as the processor of choice in the 2nd portion of the analysis.

Maybe I'm wrong, but from what I understand of how the test was performed: you archived the file, recorded the amount of time it took to archive it ("x", for instance), and then figured out how many emails got imported in time "x". Because some processors took longer to create the archive than others, they had more time to import emails. So, in order to make this data more reflective of the performance of each processor, you need to divide the number of emails imported by "x" to get e/s. The modified graph should show e/s rather than raw email counts.
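To make the proposed normalization concrete, a toy calculation (the numbers are invented, not the article's data):

```shell
# CPU A: archive took 120s, during which 600 emails were imported.
# CPU B: archive took 150s, during which 700 emails were imported.
x_a=120; emails_a=600
x_b=150; emails_b=700
# Divide emails by the archive window "x" to get a comparable rate.
awk -v e="$emails_a" -v x="$x_a" 'BEGIN { printf "CPU A: %.2f e/s\n", e/x }'
awk -v e="$emails_b" -v x="$x_b" 'BEGIN { printf "CPU B: %.2f e/s\n", e/x }'
# CPU A: 5.00 e/s, CPU B: 4.67 e/s
```

CPU B imported more emails in absolute terms only because its archive window was longer; per second it was actually slower, which is exactly the distortion the raw graph would hide.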

We are talking about by this fall, folks. And a 90nm dual core still uses LESS die area than two 2800+'s put together, because it is on a smaller manufacturing process, especially if they stick to a 512K cache, which is obviously the sweet spot. By this fall I expect that we will see sub-$200 dual cores from Intel to squeeze AMD in the desktop market. What is a real shame is that AMD doesn't have sufficient fabs to handle potential demand.

For those of you who got all hot in your pants and said "I can't buy a sub-$200 dual-core Intel right now," it is just as true that I can't buy one at all! ;P Chill!

Am I mistaken in thinking that AMD was planning to roll out their desktop dual-core line (A64 X2) in 2006? What prompted their earlier release? Was it the demand for desktop dual core, pressure from Intel stealing the desktop dual-core market, or excellent manufacturing at the AMD plants?

#9 They'd have to release a 1.6 @ $240 to accurately compete with Intel's slow Pentium D's, which start at $241. Sorry, not gonna happen. AMD isn't for budget shoppers anymore, but for those interested in performance. Want the best? Pay the price. Or a substandard CPU at a discount?

I can see a 1.8 for about $250, though; however, there's no way you're going to get into DC for less than $200.

Nice, dual-core. AMD's going to be hurt badly by the lack of volume on their X2 units, though, considering that Intel's got the money to post minor losses on each chip sold just to regain their market share. I'm surprised AMD hasn't tapped IBM for one or two 65nm fabs to prepare for the A64 X2 launch later this year...

Awesome... I wish we could have seen a 4-socket, 8-processor system rocking out with those four-way Xeons, though; that would really illustrate some differences ;)

I agree with the previous sentiment on the X2's; I hope they bring out a sub-$200 1.8GHz or so model. I will be sticking this in my desktop box, not my gaming box, so if they can't bring anything out under $200 I will probably have to go with Intel. Boo for that ;)

#4 AMD probably wants you to buy their single-core CPUs instead, as they are much cheaper and easier to produce in quantity. AMD would probably have problems delivering a lower-cost dual core in quantity.

Who doesn't drool over an A64 X2 after seeing this review??? I certainly do.

The dual-core Intel wouldn't be so bad either, except for the amount of heat it gives off.

Nice article. AMD has obviously awoken a sleeping giant, and Intel is fighting back on the pricing front. Hopefully AMD's gamble that its single cores can hold their own against Intel's duallies pays off on the mid-to-low end (at least for the near future). I won't be buying an Intel chip anytime soon (unless I need a laptop).

Either way, I figure I've got 2.5 years before I need a dual core, and by then, who knows. So bravo to both companies for this innovation.

Nicely done. The price will be a factor, as usual. Does the performance gain justify the cost? For the enthusiast, yes, but I will wait a bit. My 754 setup with a Raptor still rocks plenty for me. The technology improvements are great. I will always be a big AMD fan.