I know that... long before TWKR chips were made, I was asking AMD reps directly for high-leakage chips for OCers... the chips they'd normally bin as useless, which is EXACTLY what AMD did with the TWKR chips. Which is why I say that the "TWKR" chip was my idea...

I "got" all this stuff long ago...as far as I understand, it's just something about silicon in general, and not really anything new. Where those critical points are has changed over time, but the general nature of what this behavior is, has not.

I mean really, bin a bunch of chips under LN2, no matter the chip, and this trend emerges, which is where I got the idea from. For all I know, it could be some board thing, memory... some PLL... I don't have a clue, really.

It's worth noting that BF3 is also probably the most heavily threaded game on the market. If I recall, it's optimized for up to 10 or 12 threads, leaning towards 12, since that would make it ideal for really any CPU on the market (six-core Intels with Hyper-Threading).
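To make the thread-count idea concrete, here's a minimal C++ sketch of how an engine might size its worker pool to the hardware thread count. It's purely illustrative: Frostbite's real job system isn't public, and the cap of 12 is just the number mentioned above, not a confirmed figure.

```cpp
// Minimal sketch: sizing a game's worker thread pool to the hardware thread count.
// Illustrative only -- Frostbite's actual job system is not public, and the cap
// of 12 is just the number mentioned in the post, not a confirmed figure.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // hardware_concurrency() reports logical threads, so a six-core Intel
    // chip with Hyper-Threading reports 12 here.
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 4;  // the call may return 0 if the value is unknown

    // Leave one thread for the main/render loop and cap the pool at 12.
    unsigned workers = std::min(hw > 1 ? hw - 1 : 1u, 12u);

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back([i] { std::printf("worker %u running a job\n", i); });
    for (auto& t : pool) t.join();
    return 0;
}
```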

You are so right about this. I just installed my AMD HD 7950 this afternoon. It's testing a lot faster than my previous 3DMark scores, and frame rates are significantly higher as well. It's a good deal now: AMD has a promotion where you get three or four quality games free for download, Crysis 3 and two or three others. I already sold my original HD 6950 card and am about to unload the two HD 6970s I had bought before I reconsidered the CrossFire solution. Tomorrow my Platinum-certified PSU will arrive; I'm hoping that resolves the few blue screens I've been getting whose source I can't pin down.

Yes! That is why BF3 deserved more appreciation than it got. The PC version was coded by a separate team and optimized to take advantage of high-performance multi-core processors not present in consoles, and to use DX10/11 exclusively.

I don't believe the Cell was clocked at 3.2 GHz; I know the tri-core in the 360 was.

EDIT: Apparently it was 3.2 GHz, but at least one of those cores was disabled at all times, and the architecture and development tools made it nearly impossible to optimize code properly--not to mention the terrible memory allocation.

Isn't the natural comparison the 3570K? I.e., the Intel CPU closest in price, the CPU that also appeared in the review comparison.

For more impact I'd go with: "The Intel 3960X costs nearly five times as much, but doesn't offer five times the performance."
Still trudging through the other reviews. Seems like some reviewers didn't get a lot of time to do them.

We all see the reviews, but here is a fact: LGA 1155 has 1155 pin connectors, and the latest, LGA 2012, has 2012 pin connectors; that is their top-range processor socket. So let's jump straight into the facts: LGA 1156, then 1155, now 2012. Here is what AMD already has: a 10-core Opteron, and a 12-core with 12 DIMM rows per CPU. So what is the CPU, and what is the next step in AMD's bag? One issue is that Piledriver is on a socket with under 1000 pins, so where am I going with this? The fact is that the AMD AM3+ socket is so under-provisioned that it can no longer keep up in speed tests against the new Intel sockets, as there are not nearly enough pins to transfer the data from the mainboard to the CPU; all Intel CPUs have more than 150 more pins than AMD's AM3+, so it's going faster, fact. But G34/G35 Magny-Cours is here and is in servers all over the world, so we will be crunching it like this: socket G34 = almost 3000 pin connects. The CPU is massive, which means that on a new 22nm process they can pack over 30 cores onto one CPU.

Don't get me wrong, I like both products, but all the socket changes Intel has made have upset a lot of my clients, and end-of-life seems too soon, whereas with AMD, no problems. The problem I have is that when AMD starts pumping out G34 parts with a 10-core Thuban mixed with Bulldozer and merged into a Piledriver that has 2980 pin connectors, then we can actually take measure. But for now, yes, AMD is keeping everything very quiet, like Intel did when the Athlon X2 was king and AMD had a very long party, and then Intel pumped out the Core 2 Duo. My personal feeling is that AMD has the fastest CPU, just not in the socket it needs to be in.

So AMD is like a dragster trying to drift... yeah, I get it. I'm impressed they put out a nice product and all, but seriously... even you can hear it: tick, tock... tick... it's coming, so enjoy.
You've got about ten minutes left.

First of all, please proofread your post. LGA 2011 is Intel's top-end socket. The reason for the extra pinouts is the onboard PCI-E controller, and it has next to nothing to do with actual core logic or speed. G34 only has 1944 pads on the CPU and 1974 on the socket itself. So again, no, you are incorrect: it does not have 3K+ pins; it has fewer than Intel's top socket. All of that with dual CPU dies, a multi-CPU interconnect, and quad-channel memory.

LGA 1156 was replaced by LGA 1155, which is being replaced by LGA 1150; obviously more pins is not better.

All ASUS AMD 9-series chipset motherboards have UEFI. Quite a few MSI, Biostar, and ASRock motherboards (entry-thru-performance) have it as well. It's just Gigabyte's 9-series boards that stick to ye olde AwardBIOS. They do feature "HybridEFI" if you want to boot from large volumes, though.

You mean it's just Gigabyte boards that stick to the simple, easy-to-use, no-learning-required, tried-and-true, reliable BIOS. Yeah, they should really be faulted for that. After all, when you want to change your BIOS settings, there's such a massive improvement when your mouse is enabled, even though you still have to type the numbers into the fields most of the time.

Whatever. Next you'll want to be able to hook up a mouse to your wristwatch to change the time because the buttons are too difficult to use.

Yes, because switching from the BIOS system--developed in 1979--to the much more up-to-date UEFI is only for the sake of using a mouse in the menus. UEFI makes huge improvements that were long overdue. Being able to use a mouse is just icing on the cake, and most people probably still use their keyboards anyway. BIOS sucks, simple fact. It's old, it's outdated, it was never intended to survive this long in the first place, and there are substantially better alternatives.
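For what it's worth, one of the practical wins is native booting from GPT disks larger than 2 TB, and it's easy to check what your own machine actually booted with. Here's a quick sketch (Linux-only assumption): the kernel only exposes /sys/firmware/efi when the system was booted through UEFI firmware.

```cpp
// Quick check (Linux-only): the kernel exposes /sys/firmware/efi
// only when the machine was booted through UEFI firmware.
#include <cstdio>
#include <sys/stat.h>

int main() {
    struct stat st {};
    bool uefi = (stat("/sys/firmware/efi", &st) == 0);
    std::printf("Booted via %s\n", uefi ? "UEFI" : "legacy BIOS (or CSM)");
    return 0;
}
```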

How you define cores and threads is fairly arbitrary. x86 cores are unusual in that the definition has been so clear-cut for so long. A GTX 680, for example, could be considered a 1-, 8-, or 1536-core processor. The PS3 is closer to what you'd normally consider an 8-core than to what you'd normally consider a single-core.
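To show where those 1 / 8 / 1536 numbers come from, here's a trivial sketch counting a GTX 680 (GK104) at three levels of the hierarchy; the per-SMX figure is the published Kepler number.

```cpp
// Where "1, 8, or 1536 cores" comes from for a GTX 680 (GK104):
// it depends on which level of the hierarchy you decide to call a "core".
#include <cstdio>

int main() {
    const int dies          = 1;    // one GPU die
    const int smx_units     = 8;    // Kepler SMX blocks on GK104
    const int lanes_per_smx = 192;  // "CUDA cores" (SIMD lanes) per SMX

    std::printf("chips:      %d\n", dies);
    std::printf("SMX units:  %d\n", smx_units);
    std::printf("CUDA cores: %d\n", smx_units * lanes_per_smx);  // 8 * 192 = 1536
    return 0;
}
```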

The PlayStation 3 uses the Sony/Toshiba/IBM-designed Cell microprocessor as its CPU, which is made up of one 3.2 GHz PowerPC-based "Power Processing Element" (PPE) and eight Synergistic Processing Elements (SPEs). The eighth SPE is disabled to improve chip yields. Only six of the seven SPEs are accessible to developers, as the seventh SPE is reserved by the console's operating system.
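Just to spell out the accounting in that quote, a tiny sketch:

```cpp
// Cell core accounting as described above: 1 PPE plus 8 SPEs on the die,
// with 1 SPE disabled for yield and 1 reserved by the OS.
#include <cstdio>

int main() {
    const int ppe = 1, spes_on_die = 8, spes_disabled_for_yield = 1, spes_reserved_by_os = 1;
    std::printf("elements on the die: %d\n", ppe + spes_on_die);               // 9
    std::printf("SPEs available to games: %d\n",
                spes_on_die - spes_disabled_for_yield - spes_reserved_by_os);  // 6
    return 0;
}
```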

Then how do you explain the linear scaling of compute power with each core? Hyper-Threading doesn't let the extra threads scale as well as having real cores, so the benefit is highly variable and doesn't always provide an extra core's worth of compute. Look at what happened with AMD's processors: single-threaded applications took a hit, but multi-threaded applications that can use the extra cores worked very well. Consider for a moment the performance boost in multi-threaded applications as AMD optimizes the core, shrinks the die, and crams more cores onto it.

By sharing the floating-point unit (keep in mind that some FP-intensive applications are starting to get programmed on GPUs now; not a lot, but they're cropping up), AMD can optimize what the CPU needs to be good at. A lot more integer math gets done in a processor than floating-point math for the average user, and generally speaking, unless you're doing a lot of parallel floating-point operations, you won't take a huge performance hit, because at that point you should be considering OpenCL for large amounts of data. I think AMD is hoping you will get or use an AMD GPU to improve your floating-point performance, because there are huge benefits to be had when you can make your FP code run in parallel. Obviously the industry isn't there yet, but it will be before you know it.

Also consider all the optimizations this architecture could still use; it is new, and AMD needs time to work out the bugs. All things considered, I think they're doing the best they can against Intel given their revenue and usable income.
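If anyone wants to see the scaling argument for themselves, here's a rough C++ sketch that splits a fixed amount of integer work across 1..N threads and times each run. On most chips the time drops close to linearly up to the number of physical cores, and the extra SMT/Hyper-Threading threads add noticeably less; the exact numbers vary a lot by workload and CPU, so treat it as an experiment, not a benchmark.

```cpp
// Rough scaling experiment: split a fixed amount of integer work across
// 1..N threads and time it. Physical cores tend to scale close to linearly;
// the extra SMT (Hyper-Threading) threads usually add much less.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Spin on a simple integer recurrence so the work is CPU-bound and barely
// touches memory; write the result out so the loop isn't optimized away.
static void spin(std::uint64_t iters, std::uint64_t* out) {
    std::uint64_t acc = 1;
    for (std::uint64_t i = 0; i < iters; ++i)
        acc = acc * 6364136223846793005ULL + 1442695040888963407ULL;
    *out = acc;
}

int main() {
    const std::uint64_t total_work = 400000000ULL;  // total iterations, split among threads
    unsigned max_threads = std::thread::hardware_concurrency();
    if (max_threads == 0) max_threads = 4;          // fallback if the value is unknown

    for (unsigned n = 1; n <= max_threads; ++n) {
        std::vector<std::thread> threads;
        std::vector<std::uint64_t> sinks(n);
        auto t0 = std::chrono::steady_clock::now();
        for (unsigned i = 0; i < n; ++i)
            threads.emplace_back(spin, total_work / n, &sinks[i]);
        for (auto& t : threads) t.join();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        std::printf("%2u thread(s): %lld ms\n", n, static_cast<long long>(ms));
    }
    return 0;
}
```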