Posted
by
CmdrTaco
on Tuesday January 06, 2004 @12:14PM
from the upping-the-ante dept.

SpinnerBait writes "Unlike the Athlon 64 FX-51, this new 3400+-rated processor has a 64-bit memory interface with its integrated memory controller, drops in at several hundred dollars less than an FX-51, and is also clocked at 2.2GHz. It gives a P4 3.2GHz Canterwood-based machine a run for
its money too, as
this review with benchmarks at HotHardware reports. And where is
Prescott? Fortunately for AMD, it's a bit tardy to market, and this will give this new Athlon 64 speed bin time to take a firm hold."

Processor makers 'bin' processors. That is, they try for the fastest speed, but if a chip doesn't make it, it gets 'binned' down the line and tried as a lower-speed chip. They can also 'bin' for market reasons (putting high-grade chips in the low-speed bin because of demand, etc.).

Then everyone is ignorant, since none of us knows everything. Any term applicable to all loses its differentiating power and becomes useless, so I'm guessing the grandparent post meant "ignorant" in a more limited, less pleasant way.

"this will give this new Athlon 64 speed bin time to take a firm hold"
What's a speed bin?

In case you're not trolling: chip manufacturers crank out one design of chip, test each unit, then put them into bins based on how fast they can run reliably. They probably don't actually use plastic bins, but you get the idea.

Thus, a "speed bin" - a lot of chips designated to run at a certain speed, despite the fact that it's the same design and metal as a chip designated to run at a slower speed.

Due to minor process variations (random faults in materials, equipment, background radiation etc.) every manufactured chip is different. Some chips work fine at higher speeds, some chips only work properly at lower speeds, some chips fail to work at all. Since these microprocessors are at the cutting edge of silicon process technology, the variation matters. Now, to sort out which chip works at which clock speed, the manufacturer has to test every chip and classify them accordingly. Some are sold as 2 GHz chips, some are sold as 1.8 GHz chips, and so on. These different grades are called "speed bins".
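The sorting process described above can be sketched as a simple classifier. The bin frequencies and the pass/fail test below are made-up illustrations, not AMD's actual test flow:

```python
# Hypothetical speed-binning: each die is tested against descending clock
# targets and lands in the first bin where it runs reliably.
BINS_MHZ = [2200, 2000, 1800]  # illustrative speed grades, fastest first

def runs_reliably_at(max_stable_mhz, target_mhz):
    # Stand-in for the real stress test: a die passes a bin if its
    # maximum stable frequency meets that bin's target.
    return max_stable_mhz >= target_mhz

def bin_chip(max_stable_mhz):
    for target in BINS_MHZ:
        if runs_reliably_at(max_stable_mhz, target):
            return target
    return None  # fails every bin: the die is scrapped

# Process variation means otherwise identical dies land in different bins.
measured = [2310, 2150, 1905, 1650]
print([bin_chip(m) for m in measured])  # -> [2200, 2000, 1800, None]
```

In reality a chip may also be binned *down* for market reasons, as noted earlier, so the bin label is a floor on performance, not a precise measurement.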

I've ported the HotSpot VM to AMD64 for Blackdown. It's noticeably faster than the 32-bit version in almost all benchmarks. The main reason for the performance gain is that you have more registers in 64-bit mode.

I expect they meant to use a tilde ('~') instead of a minus ('-'), so as to indicate "about" instead of "negative."

The best a heatsink can ever hope for is to cool to the ambient air temperature, and we won't see anything approach that until we have superconducting heatsinks. (Imagine a large superconducting mass in the ground with a superconducting cable connecting it to the CPU to draw off heat: power outlets with a pin for cooling, superconducting traces on circuit boards for cooling, and no need for fans.)

Tilde \Til"de\, n. [Sp., fr. L. titulus a superscription, title,
token, sign. See {Title}, n.]
The accentual mark placed over n, and sometimes over l, in
Spanish words [thus, [~n], [~l]], indicating that, in
pronunciation, the sound of the following vowel is to be
preceded by that of the initial, or consonantal, n.

It is not uncommon to use a Peltier Effect cooler. This is basically a huge stack of thermocouples run in reverse. You put in a lot of current at low voltage, and it produces a temperature differential. If you heatsink the "hot" side to ambient air, the "cold" side may be well below zero. But, it is not very efficient (like all cooling systems), so you need to put in several times as many watts as it extracts from its "cold" side, and not surprisingly the total of both appears at the "hot" side, so you may need a very big heatsink with powerful fans.

I don't have the numbers in front of me right now, but at a guess you would need 300 watts to cool a 100-watt CPU, so you would need to dissipate 400 watts to air.
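The arithmetic behind that guess is just a coefficient-of-performance (COP) calculation. The COP value below is an assumed round number chosen to match the parent's guess, not a measured figure for any real Peltier module:

```python
# Peltier heat budget: the heatsink must dump the heat pumped from the
# cold side PLUS the electrical power fed into the module itself.
def peltier_budget(cpu_watts, cop):
    electrical_in = cpu_watts / cop           # power drawn by the Peltier
    hot_side_total = cpu_watts + electrical_in
    return electrical_in, hot_side_total

# Assuming a (poor) COP of about 1/3, as Peltiers often achieve:
power_in, dissipated = peltier_budget(100, 1 / 3)
print(power_in, dissipated)  # ~300 W in, ~400 W total to the heatsink
```

The worse the COP, the more the "hot" side total balloons, which is why the parent warns you may need a very big heatsink with powerful fans.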

It is inadvisable to make any attempt to get the chip below zero: obviously ice will form, and when you switch off, it will melt. Should you switch on again, disaster is quite probable, unless the PCB has a good conformal coating and the socket has an interfacial seal. The conformal coating can be dealt with quite easily, but I have never seen a sealed CPU socket. BTW, I usually work as an avionics designer, where we have to make things that will run from well below zero to well above, so I do know the problems.

Another issue is thermal fatigue. The temperature coefficient of expansion of silicon does not exactly match that of the (probably epoxy) package, so every temperature cycle causes a stress cycle, which causes a strain cycle, until something breaks. The same goes for the motherboard itself, of course, if you cool the whole thing. That is also a good reason never to overclock anything: apart from the possibility of subtle data errors and an increasingly buggy OS as a result of inadequate timing margins, you will definitely wear the thing out a lot quicker. Every 10 deg C roughly halves the life, or the number of on/off cycles it will survive, and if you do the calculations, the numbers are quite depressing for a modern PC.
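The "every 10 deg C roughly halves the life" rule of thumb is an Arrhenius-style exponential, so the depressing calculation is short. The baseline life here is an assumed figure purely for illustration:

```python
# Rule of thumb: component life roughly halves for every 10 deg C rise.
def life_factor(delta_t_c):
    return 0.5 ** (delta_t_c / 10.0)

baseline_years = 10.0  # assumed life at the reference temperature
for delta in (0, 10, 20, 30):
    print(delta, baseline_years * life_factor(delta))
# 0 -> 10.0, 10 -> 5.0, 20 -> 2.5, 30 -> 1.25 years
```

Run a chip 30 degrees hotter than its reference and, by this rule, you keep only an eighth of its expected life, which is the parent's argument against overclocking in a nutshell.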

If you really want a thumping great 64-bit processor (I certainly do, when the price comes down!), it would be best to calculate the cooling system, and maybe do some tests with thermocouples etc., to try to get the CPU chip to settle down at a relatively safe temperature, say 40 deg C, without ice forming on the coldest parts. The clever bit would be to get it to power on and off without any excursions below room temperature (often 20 deg C) or above 40 deg C. Heat soak when you switch off the CPU would be minimal, as the mass of the chip itself is very small, but cold soak from a huge Peltier block could be a problem: the CPU could be dragged down to -40 deg C when you switch off, which is exactly what you don't need for a long and reliable life.

The other thing to watch out for is that at low temperature the CPU internals will be out of spec. It is actually possible to get excessive current flow in some transistors, and local hot spots, because it is too cold. There may also be timing problems, data corruption, and so on. It is not possible to test properly that these things are not happening: it takes a smallish time to fully exercise an 8-bit processor to verify all possible data and instruction operations, but somewhere near the lifetime of the universe to do the same for 16 bits. Throw in onboard cache, 64 bits, etc., and it is just impossible. These things work statistically: AMD know how timing variations, for example, might vary across the chip, and allow sufficient margin within the published clock frequency and temperature range, but deviate from these ratings and this is no longer true. If running at 1GHz, a 1 in 10^12 error rate would corrupt your data or OS code within 1000 seconds, just over a quarter of an hour. The error rate required to run an OS for weeks at a time at several GHz defies all attempts at testing. Again, a very good deterrent to overclocking. (BTW my non-overclocked ancient K6-350 had a meltdown due to fan failure, and as it died the corrupt Win XP blew away all the passwords, so when I got a new CPU and fan, any of th
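The 1-in-10^12 figure works out as follows, using a back-of-envelope model that assumes one operation per clock and independent errors:

```python
# Mean time to the first corrupt operation, given a clock rate and a
# per-operation error probability (back-of-envelope, independent errors).
def time_to_corruption_s(clock_hz, error_prob_per_op):
    return 1.0 / (clock_hz * error_prob_per_op)

print(time_to_corruption_s(1e9, 1e-12))    # ~1000 s at 1 GHz
print(time_to_corruption_s(2.2e9, 1e-12))  # ~455 s at 2.2 GHz
```

To survive a month of continuous operation at a few GHz, the per-operation error rate has to be below roughly 1 in 10^16, which is indeed far beyond what any direct test could verify.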

There is a phenomenon called superfluidity, which is closely connected to "thermal superconductivity". Helium-4 below about 2.17 Kelvin (the lambda point) becomes a superfluid. In this state, there can't be a thermal gradient anymore; heat is transferred at the speed of sound. If you could use this to cool a CPU (it doesn't work, because the doped Si stops behaving as doped at that temperature), there wouldn't be any bubbles, only evaporation at the surface.

Well, not that I'm buying one anytime soon, but it's nice to know that once I do, I'll get a Linux distro that is compiled & optimized for a 64-bit CPU. So for me only Mathematica will run in the 32-bit (slower) mode. But Gimp, mplayer, video editing apps, hell, even twm and xclock will be compiled for 64-bit CPUs.

I was wondering how this is going to be sorted out by application vendors on PCs. Are they going to release 64-bit and 32-bit versions? Is every CD going to contain both? What about 3rd-party plugins? I've been asking the same question about Apple's G5, but www.apple.com (and I didn't search too carefully) is a bit short on nasty details like this. Is it really worth getting a 64-bit machine without planning to use Linux?

Is it really worth getting a 64bit machine without planning to use Linux?

Well, if you want the current top-of-the-line 32-bit performance, why not? That's a bit like asking "Should I get this Super Duty Dodge Ram with the best towing capacity available today, even though it also includes an extra cup holder I might never use?" If it has what you need for a reasonable price, why question the extras you might never use? It's not like the 64-bit-ness is truly "wasted" just because you might not use it. Those extra

"Well, if you want the current top of the line 32-bit performance, why not?"

I haven't checked, but won't a P4 system give me better "speed per dollar"?

For me personally, I couldn't care less about speed. With me the "weakest link" is usually my brain or occasionally the internet connection.

What I would care about more is a silent and small (think book sized) system. When I say _silent_ (not just almost silent), I mean that it won't need a CPU fan, no power source fan and that it would be based around a

Well, the Anandtech review has several charts showing price/performance ratios for different scenarios. In every one of them, the Athlon 64 3400+ and P4 2.8C take first and second places. So I would guess that, depending on your budget, either of these will give you the best "speed per dollar." Although, I would like to see some of the Athlon XPs compared for reference. I would be willing to bet that the Athlon XP 2600+ 333MHz/512K Barton CPU for ~$95 would rank pretty high on that chart, simply bec
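A "speed per dollar" ranking of the sort those charts show is easy to reproduce. The benchmark scores and street prices below are invented for illustration, not Anandtech's numbers:

```python
# Rank CPUs by benchmark score per dollar. Scores and prices are
# made-up illustrative figures, not real benchmark data.
cpus = {
    "Athlon 64 3400+": (110, 417),  # (score, price in USD)
    "P4 2.8C":         (95, 178),
    "Athlon XP 2600+": (80, 95),
}

ranked = sorted(cpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (score, price) in ranked:
    print(f"{name}: {score / price:.3f} points per dollar")
```

With numbers like these, the cheap Barton wins on value exactly as the parent bets, even though it loses every raw-speed chart.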

When I say _silent_ (not just almost silent), I mean that it won't need a CPU fan, no power source fan and that it would be based around a 1GB compact flash card. I would quite like a decent (not great) graphics card. And a 1 gig ethernet port.

You can get by without a CPU fan (see Via, as others state). Good luck on the no-PSU-fan part. And basing it on a CF card? With a 1 Gb ethernet port? Why? The card can't possibly keep up with the port (particularly for writes, which is another issue -- if you put any kind o

You are talking about price/performance -- or getting the most bang for the buck. Since the release of the original AMD 386 clone chip, AMD has generally had the best price/performance ratio. IOW, AMD chips will usually give you the most speed per dollar spent.

As for wanting a super-cool, super-quiet CPU, that has decent performance, you'll want to check into the Transmeta TM5800 and TM5900 series chips, or the VIA C3 chips. Both of these can run without fans, and are clocked up around the 1GHz mark

Actually, an AthlonXP will give the best speed per dollar, since it gets more done in a clock cycle. It's actually pretty close between Athlon and Pentium, but if you add in the cost of the electricity over the life of the computer, the AthlonXP will win.

What I would care about more is a silent and small (think book sized) system. When I say _silent_ (not just almost silent), I mean that it won't need a CPU fan, no power source fan and that it would b

Code that has been optimized for the G5 by simple re-compilation will run without penalty on a G4. If you have done more in-depth, G5-specific tuning (levels 1, 2 and 3) then you will in all likelihood want to provide a separate binary. In extreme cases, you may decide that you need only offer one version of your software that runs on Power Mac G5 computers only. However, you'll probably want to support most or all of the Macintosh product line, which means that you need to decide how best to deliver the right code to each of your customers. There are several ways to achieve this; the first is:

Create different versions of your software for each processor that you support. This requires that you maintain three parallel code bases, something you may not want to do.

It is possible for your software to query the computer on which it is running to see which processor-related features are available. You can design your software to isolate processor-dependent code and call the appropriate version as needed. This leads to two additional strategies for packaging your application:

For every function that calls processor-dependent binary code, have your code call the appropriate version. If such functions are needed frequently, using this approach may decrease execution speed and make your source code (cluttered with if...then constructs) less readable.

Isolate processor-specific functions into frameworks or shared libraries, then have your software load the appropriate version when it starts up. This enables you to write your main code without wrapping function calls in if...then constructs.
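The last strategy, selecting a processor-specific code path once at startup, can be sketched as follows. The feature probe and function names here are hypothetical stand-ins, not Apple's real G4/G5 detection API:

```python
# Bind one implementation at startup based on a (hypothetical) CPU probe,
# so the main code calls a single name with no per-call if/then checks.
def detect_cpu():
    # Stand-in for a real platform query (e.g. a sysctl on Mac OS X).
    return "g5"

def transform_generic(data):
    return [x * 2 for x in data]

def transform_g5(data):
    # Pretend this is the G5-tuned version of the same routine.
    return [x * 2 for x in data]

# Bound once -- the moral equivalent of loading the right shared
# library -- then called everywhere by the same name.
transform = transform_g5 if detect_cpu() == "g5" else transform_generic
print(transform([1, 2, 3]))  # -> [2, 4, 6]
```

This is the same dispatch-once pattern the Apple text describes with frameworks and shared libraries, just compressed into a single file.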

Likely, vendors will only ship one version of their app, the 32-bit version, since AMD64 CPUs can run both 32-bit and 64-bit code. If the app doesn't need or greatly benefit from 64-bit, why bother with it? (The code will likely need porting to 64-bit anyway, especially by average Windows programmers who've never had to worry about different CPU architectures before.)

Apps that greatly benefit from 64-bit support may either be 64-bit only, or provide both versions. I recall reading that the UT developers plan on r

Unfortunately, it doesn't work that way. You can't compile and get just the extra registers. You have to take it all, which includes changes such as the sizes of certain data types. The software will likely need modifications for this if the developers never intended it to run on non-32-bit systems.
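You can see one of those size changes directly from Python, which exposes the platform's native C type widths. On an AMD64 OS the pointer width prints 64; on a 32-bit x86 OS it prints 32:

```python
import struct
import ctypes

# "P" is the platform's native pointer type; its width is what changes
# most visibly when code moves from a 32-bit to a 64-bit data model.
pointer_bits = struct.calcsize("P") * 8
print("pointer width:", pointer_bits)                 # 32 or 64

# C's `long` also changes on LP64 systems (8 bytes) but stays 4 bytes
# on 32-bit systems and on 64-bit Windows (LLP64).
print("C long size:", ctypes.sizeof(ctypes.c_long))   # 4 or 8
```

Code that quietly assumed `sizeof(long) == sizeof(int) == sizeof(void*) == 4`, which was safe on 32-bit x86, is exactly the code that needs the modifications the parent mentions.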

Actually, no. Most of the improvements are due to architectural improvements only available in 64-bit mode. They have little to do with the fact that integer registers are now 64-bit, but you don't get them in 32-bit mode anyhow. 64-bit mode on AMD64 should be about 20% faster than 32-bit mode, so Mathematica is running in "the slower mode."

You fanboys just don't get it. 32-bit vs 64-bit has nothing to do with speed. The 32-bit mode that Mathematica runs in is not the "slower" mode. All of the performance increases in the Athlon 64 are due to architectural enhancements that are completely independent of the size of the registers...

disclaimer: I am no expert on CPU stuff, I just do a lot of math computations.

If he's actually doing any serious work in Mathematica then 64 bit does start to matter. High powered math is one of the areas where having

Looks pretty good. I still don't think there is huge demand to have these in desktops as of yet. P4s are still very powerful and still compete with AMD's 64-bit chips. Even the Athlons are enough for most people to play the newest games and all.

I don't think that most people do the really compute-intensive tasks that would benefit from 64-bit chips; plus, the lack of truly 64-bit software to give them this advantage is a hindrance as well.

I think it will be 2005 or maybe even 2006 before 64 bit chips become the standard.

"I don't think that most people do the really computer intensive tasks that would benefit from 64bit chips"

Everyone would benefit from switching because of the extra registers in 64-bit mode and the low-latency memory controller. Some people have said they got a 10-20% speedup just from recompiling in 64-bit mode without making any changes to their code.

Of course if all you do is run Word all day that will make little difference... but if all you do is run Word all day you'd probably be happy with a Penti

Everyone would benefit from switching because of the extra registers in 64-bit mode and the low-latency memory controller. Some people have said they got a 10-20% speedup just from recompiling in 64-bit mode without making any changes to their code.

Seriously though, how many "regular" computer users have access to the source code for their applications or would even know what to do with it if they did.

This is great for universities and research facilities that use either their own software or open so

"Seriously though, how many "regular" computer users have access to the source code for their applications or would even know what to do with it if they did."

Uh, that's irrelevant, as people will buy 64-bit versions of software for their 64-bit PC... and that software will run faster just because it's been recompiled with a 64-bit compiler that doesn't waste so much time shuffling data between registers and memory.

"But before it becomes really mainstream, you are going to have to have the 64-bit windows (n

Seriously though, how many "regular" computer users have access to the source code for their applications or would even know what to do with it if they did.

There is a vast horde of us Gentoo users who are laughing like hell at you right now.

This is the nicest thing about distributing applications as source. Who cares what architecture I run on, if it'll compile it'll be optimized and work on my local hardware! Itanium, x86-64, x86, any will do. I'll just 'emerge -ev world' and everything will rec

The reason AMD built a 64-bit chip: you have to understand the changes going on in the server market. In short, x86 caught up with proprietary architectures. Realize that it probably takes about $2 billion to develop a processor architecture (I'm probably low here, but you get the idea). Let's assume that 15 million server processors are sold annually. Sun ships about 300k servers a quarter and has 1/3 of the market, most of them at the lower end (i.e. lots more 1-4 processor systems than 72-processo

On most of the roads in the nation, the speed limit is either 55MPH or 65MPH. Some places out West on the Interstates, it's 75MPH. Even a 100MPH speedometer is WAY overdesigned, well past short-term bursts for passing, accident avoidance, and the like.

So why do we have speedometers that go up so high, and why can many cars actually go that fast? After all, it's illegal, and we don't NEED that speed, or speedometer.

My point wasn't to be taken too literally. I was trying to say that most (but not all) of us will never drive over 100MPH. Most (but not all) of us have no need for a speedometer that goes to 120 or 160 MPH, but we all have them. That's not to say that our speedometers shouldn't have some margin above the top US speed limit of 75MPH, which debunks the Apollo analogy a little: we need some margin, just not 100% margin. (It may be that Montana is b

Ugh. There used to be a law in the US that speedometers had to top out at 85 mph. My car ('93 Explorer) has that, and it's damnably unsafe. Most of the time, I have no fucking clue how fast I'm going, other than "as fast as the cars around me." I've taken to using a GPS on I-280, just to know if I'm going under 100 (at which point the cops might care, since I'm obstructing traffic going so slow) or not.

I still don't think there is a huge demand to have these in desktops as of yet.

for the casual home ding-dong user? You are right.

for businesses and companies that depend on processing power? Ha! The AMD 64s rock massively.

I replaced a dual Xeon box here at work that was the CGI station running Blender, povray and yafray and producing better graphics than a maya station next to it, and faster... with a dual Opteron using a 64bit compiled Gentoo install on it...

Then we recompiled the apps for 64 bit.

I am getting a 70% increase in rendering speed. I'm betting that with some optimization this could work even better... the Blender guys are working on that right now BTW...

A 64-bit Linux version of Maya? The company said "maybe 4Q 2004 for beta testing."

which is a shining example of why open source is the way to go.

Businesses that use that kind of number crunching and processing power, and are smart enough to have embraced Linux for the needs it can fill, are all over AMD64 right now.

Has anyone tried to encode XviD with one of these in 32- and 64-bit, preferably using Linux? Is there much difference in speed? I'm looking at the 3000+ part as it is cheap, but there are zero benchmarks to back it up in 64-bit mode.

If they are comparing the $700 [newegg.com] AMD Athlon 64 FX chip, they should be comparing it to Intel's $1000 [newegg.com] P4 3.2 EE chip, not their sub-$400 P4 3.2.

Also, does anyone have an idea how expensive the AMD 3400+ chips are? The AMD 3200+ chips are $400 [newegg.com] retail. The article quoted a price in thousand-unit quantities, but I was wondering how much it would cost for just one. If it's pricey enough, the P4 3.2 may beat out the 3400+ dollar for dollar.

Though Intel doesn't have to really worry about that title. At $164 [newegg.com] the Pentium P4C smokes the pants off any AMD processor in its price range. At least, after overclocking it to 3 GHz, which is very doable even with standard cooling.

"Though Intel doesn't have to really worry about that title. At $164 the Pentium P4C smokes the pants off any AMD processor in its price range. At least, after overclocking it to 3 GHz, which is very doable even with standard cooling."

Will it really be cheaper and faster when you have to buy a new one every 6-12 months because you destroy it?

Will it really be cheaper and faster when you have to buy a new one every 6-12 months because you destroy it?

Indeed the word "overclock" has become my hardware review spam filter; it has a strong "cold fusion" connotation. If I drop $700 on a CPU, I will not be running it out of spec in any way. If I'm that hungry for speed, I'll build a cluster.

I can understand people wanting to overclock, say, a P-III 933 to see how far they can push it, but I just don't get the fanboy fascination with extreme cooling, adding a few megahertz, etc. Reading this stuff in a tech article is like finding an article on adding a whaletail to a ricer in Car&Driver - it just doesn't belong in a serious text.

You are obviously not a hardware enthusiast (this may sound like a flame, but I just feel too strongly about this)... You can easily overclock for economical reasons without sacrificing CPU life. I've done this for years and continue to do so. As long as you know what you're doing and have proper data from research (from reviews like this), you can take a sub-$100 processor and have it run like one for over $400, for years. The best part is that if you have the right components and you know how to tune

It's nice that you design hardware, but from the sound of it you don't work for Intel, AMD, or any of the relevant processor and chipset companies. Also, you don't seem to be aware of many of these companies' practice of speed binning processors (or whatever it's called nowadays), in which cores capable of higher speeds are packaged as lower-speed processors to satisfy demand in a certain speed/price range.

Next you talk about games and windows crashing... Well, all of my systems are thoroughly tested (in

BTW, my cooling wasn't extreme... just a very large heatsink with a large surface area and a 90mm fan (the larger the fan, the lower the RPMs needed to move air and the lower the noise). Simple air cooling, not water, Peltier, or phase change... 50% faster and about 15 degrees cooler than most processors running at spec (the heatsinks AMD ships are extremely weak... this is one thing Intel has them totally beat at), as far as AMD goes.

So at retail, using your $400/3200+ as the mark-up ratio, that should be something like $1040, $600 and $400, respectively.

Also, on the P4/Athlon war I haven't checked lately since I'm happy with the XP2000+ I have, but at the price range I've been at AMD has come out on top for my last three processors (Duron 700, Athlon 1200 and the above mentioned XP2000+). Maybe the P4C is different, right now I really don't care though:)

Any piece of hardware that can spank the competition EVEN while its potential isn't fully being realized by the software testing it deserves my dollar. And yes, I'm talking about how well it games; I really couldn't give a flying fsck about how quickly it runs Office...

<sarcasm>Well, in all my testing of running 16 bit apps, a Pentium I outran a similarly clocked P4 by a healthy margin - so obviously the Pentium is a better chip, right?</sarcasm>

Seriously - For a period of time the A64 will be running mostly 32 bit apps (at least in the Windows world), and so it is fair to benchmark its performance against 32 bit apps. But I cannot help but wonder how much P4 tweaking all those apps had, and how much A64 tweaking they did not have.

Also, the memory performance tests are, to my mind, somewhat questionable as well, as different CPUs even within the Pentium line have different memory access behavior - code that will be bus limited on a P4 might not be bus limited on a P3.

I am not saying the comparisons are not useful, but I am saying that they don't tell the whole story. Let us see some benchmarks wherein the A64 is running code that is written for the A64 - using the extra registers and so on.

The reviews are all the same--run various permutations of the PC through benchmarks, and display the results using bar charts. And not just any bar charts. Use a gradient to color the bar, so that the color legend is rendered useless.

The reviewers should read Tufte, and figure out a more effective way of illustrating their analyses than endless pages of bar charts. Oh wait, that's how they get their ad revenue. Never mind.

Because a good majority of IT professionals don't have time to deal with instability, testing, high-performance cooling, or nonsense like that. In the Real World, I need a machine that can render reliably on a daily basis. Overclocking is fun, at home, as a hobby. In the office from 9 to 5, machines need to come out of the box and "just work".

You are missing a little bit. There's a reason why AMD started giving their processors model numbers instead of clock ratings: clock ratings are starting to mean less and less.

Clock rate is only one of the things indicative of a CPU's power... you might want to consider that comparing the 64- and 32-bit varieties of this chip is somewhat akin to comparing a 1GHz P4 vs. an AMD, or a 266MHz Pentium vs. a P2-266.

I'm in the market for a new mobo/CPU, to upgrade from an XP 2600/333. It looks good and appears to demand a bit less power; however, after all the trouble I've had with my present mobo (Asus A7V8X), I'm still iffy about dropping the cash. Processors always look good; mobos (now that I'm a bit cynical) all look like dressed-up used cars you don't want to look under the hood of.

Recommendations on a good solid board for one of these? (I don't have money to go out and buy new boards and stack them up as dust c

You're forgetting that AMD has a variety of different speed chips out on the market. Some people just want the fastest chip they can buy, and will pay whatever it costs to get it. Those people are the reason chips cost as much as they do when they come out.

Most people however simply have a rough estimate of what they want to spend on a computer, and buy the best they can afford for that price. They're just as happy with a 3000 or 3200 instead of a 3400.

For starters, your friend was undercutting every other X-Box retailer in the country by doing this. He wasn't selling 10 per day because the price was low in absolute terms--he was selling them because his price was lower than everybody else's. Imagine if most retailers had followed his lead and lowered their price to $289 a box, too. He would have been selling the standard 3 per day, same as everybody else. So no, this ISN'T a good example of what you were trying to prove.