63 Comments

Looking at PURELY gaming purposes, a faster CPU is ABSOLUTELY USELESS. Face it, the majority of gamers considered hardcore don't run 2560x1600 resolutions, and at the resolutions they are running, 1280x1024 or 1600x1200, it seems getting a faster video card is what's worth it.

Nope. Why get a faster CPU when you already get 100+ frames per second?? What a faster video card enables is higher quality graphics WITHOUT losing performance compared to lower performing video cards. A CPU does what?? An absolutely useless 20 additional frames on top of an already more than adequate 100+.

Yet we still see reviewers benching systems with 4xAA and 16xAF at ever higher (and ridiculously unreal) resolutions. But we know the latest video cards allow super high AA settings too. 1600x1200 with 16xAA 64xAF (OK, I forgot what the highest AF was, cut me some slack?? Way too many features in new video cards, lost track around X800 time...) sounds pretty good. Reply

So? That's AMD's problem if they can't afford to put 4MB of L2 on their processors. The comparison is what it is: processors at equal price points. They only had the 5000+ on hand, so it was compared to the closest processor in price.

The 4600+ was only used because it is the highest 65W TDP processor on the 90nm node. It was probably included to see whether the 65W TDP was justified on the 65nm 5000+.

I don't know about you'ins. But I give a shit less about Intel's or AMD's or Macy's marketing prowess. I do my own research from my own sources and make decisions based on that. What's an ad? What's a TV commercial? Only the clueless and the lazy need those to guide them in what products to buy or even if the product exists. I don't give a shit about marketing. Reply

Nice price comparison... maybe you should also add that for the same feature set on the mobo there is a price difference of 1/3 in favor of AMD! So put an E6400 into this comparison and then you have the same price tag!

Well, we all know OC'ing C2D is for noobs... you don't need any knowledge... (see it as good or as bad, whatever you want)

But OC'ing AMD does require knowledge.
The HTT link is at 1125; what do you expect to reach from an OC? On 90nm parts most HTT links gave out around 1060-1080, so maybe try to make a decent OC...

Sorry, I should have addressed this in the review - HTT speed had no impact on my overclocking results with this particular chip; even with an HT multiplier of 4X the chip won't get into Windows at 2.99GHz. Memory wasn't a factor, as it was set to the lowest speed possible in order to see how far we could push the CPU.

I have a feeling that with better air cooling, close to 3GHz may be possible, but I wanted to look at the worst-case overclocking potential using just a stock heatsink/fan, similar to what we did in our Core 2 overclocking article at its release.

I'm working on the Brisbane 4800+ now and will find out soon if it overclocks any better.

OC'ing AMD is pretty simple as well. Just drop the HTT multiplier to 4X if you get above about 220 MHz - not that the total HTT speed generally matters. You can also drop memory dividers if you have less capable RAM. I still think 775 overclocking requires a bit more knowledge/effort - a bit, I said, not a lot! I can't see anyone but an elitist thinking easier OC'ing would be a bad thing, though. Reply
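To make the HTT arithmetic above concrete, here's a quick sketch (assuming the stock 1000MHz HT link spec for AM2; the function names and the 220MHz example are just illustrative, not from the article):

```python
# HyperTransport link speed = reference clock x HT multiplier.
# Assumes the stock AM2 HT spec of 1000 MHz; names are illustrative.
HT_SPEC_MHZ = 1000

def ht_link_speed(ref_clock_mhz, ht_multiplier):
    """Effective HT link speed for a given reference clock and multiplier."""
    return ref_clock_mhz * ht_multiplier

def max_in_spec_multiplier(ref_clock_mhz):
    """Largest HT multiplier that keeps the link at or below spec."""
    return HT_SPEC_MHZ // ref_clock_mhz

# Stock: 200 MHz x 5 = 1000 MHz, exactly at spec.
print(ht_link_speed(200, 5))        # 1000
# Overclocked reference clock: 220 MHz x 5 = 1100 MHz, over spec...
print(ht_link_speed(220, 5))        # 1100
# ...so drop the multiplier to 4x: 220 x 4 = 880 MHz, back in spec.
print(max_in_spec_multiplier(220))  # 4
print(ht_link_speed(220, 4))        # 880
```

This is why dropping the HT multiplier costs essentially nothing: the link just needs to stay at or below spec, and total HTT speed generally doesn't matter for performance.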

Why is 775 OC harder? It has one less factor to keep in mind, and I won't even get started on the memory options...

I've already played with a 6600 on an Asus board: just left everything on auto, set the FSB to 400, and hey, it ran stable at 3600 (depending on your memory, of course). WTF, you might say, on stock vcore? Nope, the board increased the vcore automatically... it was running at 1.56V on auto :)

Will you update this review regarding the HTT? Or will you leave it like lots of your reviews (like the Woodcrest testing), saying that some more tests are coming but eventually nothing changes... People actually read your reviews and believe them wholeheartedly... so an accurate review is always welcome... Reply

Mr. Anand, is it possible for you to use Asus's Crosshair motherboard when attempting max overclocks on these 65nm chips? It's only fair: you use the top Intel board but leave out the top AM2 board. My understanding is that Asus's Crosshair board is a ~15%-17% better performer than other boards. Also, I've heard that the DFI board is a great overclocker, and you used it in the s939 reviews. I would appreciate it if you used either board, but preferably the Asus Crosshair. Thanks. Reply

Unfortunately I don't have either of those boards here for testing, but I'm sure I can persuade either Gary or Wes to do a follow-up with a more serious look at 65nm overclocking once I'm done with the power analysis on these chips. :)

Anyone else wondering what this means for the new generation of Turion X2s?

I know Santa Rosa is coming out either Q1 or Q2 this year; it's supposed to support an 800MHz or 400MHz FSB, depending on system load, which should drop power consumption a bit. But, as I understand it, the real battery suckers are the CPU, display, and HDD. (Yeah, throw in the GPU too if you have discrete graphics.) So where does that leave the chipset - will Santa Rosa really do much for battery life?

If not, AMD could make serious inroads into the laptop segment. 65W is hot for a laptop, but if they can drop their desktop TDPs by about 1/4 to 1/3, why can't they do the same for their laptop chips? Reply

FYI even 90nm Turions consume LESS power than Merom. (Merom is more power hungry than Yonah).

Also, the RS690M is about to rule the integrated market (along with the RS700M for C2D). In other words, AMD is going to rule the chipset market for both platforms while being pretty competitive in CPUs, especially for business use.
(For business, features and battery life are what count, not absolute performance.) Reply

That shows power consumption to be nearly identical between the two processors.

Not sure if you are comparing Turion or Turion X2 to Merom, but aside from mismatched comparisons (such as comparing the power consumption of a Turion system with onboard graphics against a Merom system with dedicated graphics), I've not seen like-for-like tests showing Turions to be more power efficient:

"Does that make the Core 2 Duo worse at power saving than Turion X2? Without equivalent setups (i.e. both using IGP or both using discrete GPUs), we can't say for certain. We can say that an ASUS W5F with a T2300 chip (1.67GHz 2MB cache) that we had at one point bottomed out at 19W in idle mode, so Core Duo and Turion X2 appear close in low power states, with Turion X2 perhaps holding a slight 1-2W advantage. Our testing of Core Duo vs. Core 2 Duo showed the CPUs to be nearly equal in power draw, so it appears AMD is equal or slightly better than Intel at minimum power draw. At maximum power draw by the CPU, Turion X2 is definitely using more power than Core 2 Duo, as even with higher performance/power components the ASUS A8JS still uses less power than the MSI TL-60 at 100% CPU load."

Darn, was so hoping AMD 65nm would give an easy 3.3GHz+ like the Intel chips =(. This chip may not reflect overall OC potential, but it's hovering around the same 2.6-2.8 range as my AMD again. A 2.66-2.7GHz OC'd Intel is > a 2.9GHz AMD. I'll just wait again. Reply

Now that both companies claim to offer "platforms", it'd be interesting to see an Intel 965 board vs. an ATI board used in these benches. Not sure that the NVIDIA models were the best choice here, especially since they're not apples to apples on generation and power consumption. Reply

I'm quite shocked to see the gaming performance advantage @ 1600x1200!!!! Isn't the CPU removed as a performance bottleneck @ such a high resolution? It's all about the GPU horsepower @ that resolution, no? Can't believe a CPU can show a 25fps diff @ 1600x1200?!?!? Reply

quote:I'm quite shocked to see the gaming performance advantage @ 1600x1200!!!! Isn't the CPU removed as a performance bottleneck @ such a high resolution? It's all about the GPU horsepower @ that resolution, no? Can't believe a CPU can show a 25fps diff @ 1600x1200?!?!?

Some more research is in order here. The G80 GPU, known on the market as the 8800 GTX or GTS, has taken graphics performance to a new level. All of the reviews that used an AMD FX-60 or FX-62 clearly showed the card being held back by the CPU in many if not most cases; only at the highest possible resolutions/AA + FSAA did GPU-limited scaling kick back in with an FX-60. The X6800 released the full potential of this card.

The difference in framerate you see between the 5000+ and the E6600 is that the E6600 has pushed the bottleneck further out -- simply because the E6600 is a better gaming CPU.

Tom's did a good article titled "The 8800 Needs the Fastest CPU."
In essence, even for a single card solution, an AMD CPU is not a good match for this card. Reply

This is why back at the Core 2 Duo launch we talked about CPU performance using games running at 1280x1024 0xAA. When a faster GPU comes out and shifts the bottleneck to the CPU (as has happened with the 8800 GTX), saying "Athlon X2 and Core 2 Duo are equal when it comes to gaming" is a complete fabrication. It's not unreasonable to think that there will be other games where that ~25% performance difference means you can enable high quality mode. Company of Heroes is another good example of a game that can chug on X2 chips at times, even with high-end GPUs. Reply

How's a chip that uses less power run hotter? On the last page the 65nm X2 5000+ hit 51C under load, lower than any other chip, but it uses more power than any chip except the 90nm X2 5000+. How's that work? Reply

Not all the power that goes into a chip is released as heat. The heat is basically wasted power that "leaks." So if a chip can get more useful work out of the same amount of power then the amount of heat released would decrease even while power consumption remained steady.

I'm not an expert, but I believe a lot of the special new process techniques we always hear about (like strained silicon) basically just reduce the amount of wasted energy. Am I right here? Reply

If the processor released all of its energy in heat, it'd be the world's most efficient space heater.

You have forgotten a few little points, like the fact that work is done by a processor (there wouldn't be much point otherwise). Transistors are switched, mostly; however, the IC itself can expand and contract, as can the packaging material. The heat generated by a CPU is due to the resistance inherent in its circuits. All of the above counts as energy expended (or, more properly, changed in state).

In other words, don't go around insulting people's intelligence when you yourself don't know what you're on about. Reply

I'm betting only one of us has an Electrical Engineering degree, and guess what... You're not the one with it.

The work being done by the CPU is what makes the heat. The transistors themselves create heat because they consume power, and a lot of it, to switch from one state to another at high speeds.

I will say it again since you still don't get it, though it probably won't help. Energy is conserved. Electrical energy goes in, and heat comes out. The thermal expansion and contraction of the part isn't work; it's a side effect of the heat being produced when the transistors consume electrical power by switching. Reply
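For what it's worth, the standard first-order CMOS power model (textbook material, not from the article) backs this up:

```latex
P_{\text{total}} \;\approx\; \underbrace{\alpha \, C \, V_{dd}^{2} \, f}_{\text{dynamic (switching)}} \;+\; \underbrace{V_{dd} \, I_{\text{leak}}}_{\text{static (leakage)}}
```

Here \(\alpha\) is the activity factor (fraction of transistors switching per cycle), \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, and \(f\) the clock frequency. By conservation of energy, virtually all of \(P_{\text{total}}\) is dissipated as heat; the quadratic dependence on \(V_{dd}\) is also why process shrinks that permit lower voltages cut power so effectively.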

I think everyone here knows that; the issue is that current -> heat is not the only type of transformation that can occur. If it were, then anything electric wouldn't be able to do anything at all except create heat, and obviously that isn't true. Reply

The problem in this discussion is that most participants don't know what heat, electricity, work (in the physical sense, i.e. joules), and work (in the logical sense) mean.

Except for electromagnetic waves that leak from a badly shielded PC and the energy required to transfer information out of the PC (via monitor and network cables), the energy (in the form of electricity) consumed by the CPU is completely converted to heat.

In other words, a "focused" form of energy with low entropy (electricity) is distributed to the environment and becomes an "unfocused" form of energy, mostly heat. And even the heat dispersed to the room is still partly focused, in the sense that it hasn't yet spread to the whole universe.

If you can see something glowing, then at least part of the energy is producing visible light and not heat. Although now that you mention it, I think I heard that plain old light bulbs are fairly efficient heaters. Reply

quote:The entire semiconductor industry is struggling with the heat of chips increasing exponentially as the number of transistors increase exponentially. Moving to new high-k materials that control leakage is one step of many towards making transistors run cooler. Because high-k gate dielectrics can be several times thicker, they reduce gate leakage by over 100 times, and therefore devices run cooler.

This implies what I was saying, but perhaps the devices only run cooler because they require less power to begin with? Reply

Heat density: even a chip drawing the same or less power can potentially run hotter if that power is concentrated in a smaller area (witness Prescott vs. Northwood P4). Except we're seeing the reverse here, so it's probably just a difference in chip/package design. There's no guarantee that the various chips are measured identically, meaning AMD could have changed temperature reporting with 65nm, and certainly the AMD vs. Intel numbers are not a direct comparison. I would put more weight on the power numbers, personally. Reply
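The heat-density point is just TDP divided by die area. A back-of-the-envelope check, using the 65W TDP and the die sizes quoted in these comments (147 mm^2 for 65nm Brisbane, 183 mm^2 for the 512KBx2 90nm Windsor); note TDP is a worst-case envelope, not measured draw, so these are rough upper bounds:

```python
# Power density = TDP / die area. TDP is a ceiling on consumption,
# not actual draw, so treat these as rough upper bounds only.
def power_density_w_per_mm2(tdp_watts, die_mm2):
    return tdp_watts / die_mm2

print(round(power_density_w_per_mm2(65, 183), 3))  # 90nm Windsor:  ~0.355 W/mm^2
print(round(power_density_w_per_mm2(65, 147), 3))  # 65nm Brisbane: ~0.442 W/mm^2
```

So at an identical 65W TDP, the smaller 65nm die would have roughly 25% higher power density, which is why a shrink alone would be expected to run hotter, not cooler, under the same heatsink.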

In future Brisbane reviews could you check the Brisbane idle temperature to see if it appears to be somewhat accurate? Other previews and leaks show Brisbane idle temperatures in the 10C-15C range which is well below room temperature. Idle and load temps of Brisbane may actually be 15C higher than what is reported. Reply

If anything, the die shrink should make Brisbane run slightly hotter, since the die is a little smaller. I can't come up with any good reason why one 65W processor runs cooler than another 65W processor given the same cooler and the same size heat spreader. Maybe the heat spreaders aren't equally flat across all the AMD CPUs. The 35W AM2 processor definitely should have run cooler. Reply

"As you can expect, AMD is pricing the 65nm chips in line with its 90nm offerings to encourage the transition. Die size and TDP have both gone down to 147 mm^2 and 65W across the line."

Is 147 mm^2 accurate? That happens to be the same die size as the 90nm A64 X2 Manchester, and isn't much of a shrink from the current 183 mm^2 512KBx2 Windsor cores. Some rumors had put it at ~125 mm^2. Reply

quote:Strange how Anandtech only started including it when Intel started marketing it insanely.

No... This was one of the major topics of discussion in the A64 vs. P4 days as well, only then AMD was enjoying Intel's current position. So Intel has a better chip right now; what's the big deal? This is why we like competition. Reply

We didn't used to care about performance/watt in the consumer sector; we left that to the professional arena. But ever since the power draw and heat of consumer boxes started to rival professional-level boxes, it's good to see what you are getting.

Gone are the days of a 250 watt PSU and a system that is passively cooled.

Seems to me performance per watt is AMD's only leg to stand on right now (unless you just want to go with pure power draw). If you just look at performance AMD is even farther behind and that won't change until we see K8L, and maybe even that won't be enough. It's good to see lower temps and power requirements from AMD as it shows they're at least doing something other than sitting around bemoaning their fate. Intel already wins the marketing war, and now they're killing AMD on performance as well.

Would have liked to see more CPUs in the comparison, though - like how about a 65nm 4600+ and 3800+ to see how they match up with the EE/EE SFF chips? Getting an E6300/6400 in as well would have been nice. Last, the use of an 8800 GTX is going to skew the "performance per watt" numbers, but it should still be consistent within this one set of benchmarks. As long as we don't get comparisons using other GPUs thrown in, it's probably the best you can do. Still, it would be nice to know how much power the CPU is really drawing in all these tests - probably less than 50W would be my bet. Reply
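The GPU-skew point is worth illustrating with made-up numbers (none of these wattages come from the article): a large fixed platform load, such as an 8800 GTX plus motherboard and drives, compresses the apparent performance-per-watt gap between CPUs when you measure at the wall.

```python
# Hypothetical illustration: a big fixed platform draw dilutes
# CPU-to-CPU differences in wall-socket perf/watt.
def perf_per_watt(score, watts):
    return score / watts

PLATFORM_W = 180           # hypothetical GPU + motherboard + drives
cpu_a_w, cpu_b_w = 50, 65  # hypothetical CPU-only load power
score = 100                # equal benchmark score, for simplicity

system_ratio = (perf_per_watt(score, PLATFORM_W + cpu_a_w)
                / perf_per_watt(score, PLATFORM_W + cpu_b_w))
cpu_ratio = perf_per_watt(score, cpu_a_w) / perf_per_watt(score, cpu_b_w)

print(round(system_ratio, 3))  # ~1.065: looks nearly identical at the wall
print(round(cpu_ratio, 3))     # ~1.3: the CPUs alone differ by 30%
```

That's why CPU-only power draw would be the more telling number, even if it's harder to measure.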

Intel won WHAT marketing war? More like they won a battle, after losing many to their competitor over roughly two years. Also, unless something has changed recently that I'm unaware of, AMD also leads in memory bandwidth - not that that much memory bandwidth translates into real-world performance.

I don't know which platform you like, and to be honest, I don't really care. However, it would behoove us all to realize that AMD and Intel are both good companies, and if one were to go missing, we ALL would be in a much worse "market". Reply

Right now, I prefer Intel. Oddly enough, though, I'm still running AMD Opteron 165 overclocked to 2.6 GHz, because it's fast enough. Better? Nope, but good enough. So I preferred AMD when they were ahead.

My point about marketing is that Intel has always had the lead there - ALWAYS! How many AMD commercials do you see? A few, maybe. Intel floods the airwaves with adverts. Being better at marketing doesn't make your products superior, but if you own the mindshare of the common man it can make it damn hard for others to compete. Hell, with the junk that was Prescott (and the later NetBurst stuff), the fact that most corporations were still buying Intel says something, doesn't it? Yes, AMD made a lot of gains, but with Intel back in the lead those gains are eroding fast.

I for one hope K8L kicks some serious ass. HOPE. I remain unconvinced that AMD can close the gap and then some in the next 6 months. We shall see. Let's just hope it's not more of this Quad FX craziness. If I wanted two dual core AMD CPUs, I think I probably would have bought an Opteron 2xx setup a while back. Unbuffered RAM might improve performance a bit (5% or so?) but not enough to matter. As for octal core systems (dual quad core), don't make me laugh. That's great for the server market, and maybe even high end workstations, but on the desktop we're years away from being able to utilize/need that much CPU power. Reply

Well, better a double post than to lose all that I typed. It looked like the post didn't go through; luckily I had copied it first, and after three refreshes I posted again. Naturally, at that point both posts showed up. Reply

I don't know what current Intel CPUs' memory bandwidth is, but at the time I checked my AM2 3800+ (single core), the closest thing was a Core 2 Duo at around 8-10GB/s bandwidth. After dropping the multiplier on my AM2 CPU and increasing the "FSB" to 250MHz, memory bandwidth on my 3800+ AM2 was over 11GB/s. I didn't test beforehand - well, maybe I did, but I don't recall what it was, so... I may have posted it somewhere on the AnandTech forums. Anyhow, I'm pretty sure it's much more than a 5% difference between the two, but like I said, it does not translate into real-world performance, so it's pretty much moot. Reply

Yes, I would have appreciated a lower-end Core 2 Duo that is more comparable performance-wise, as well as the E6600, which matches its price.

Basically, it looks like the new process is only a bit better than the old energy-efficient chips, but it's clocked higher and will be sold cheaper. The important thing for AMD is probably to get their 65nm process ramped up and all the bugs ironed out for a good K8L launch. Reply