Posted
by
Soulskill on Saturday September 03, 2011 @06:30PM
from the make-it-so dept.

MojoKid writes "AMD and Intel are both preparing to launch new CPU architectures between now and the end of the year, but rumors have surfaced that suggest the two companies may delay their product introductions, albeit for different reasons. Various unnamed PC manufacturers have apparently reported that Intel may push back the introduction of its Ivy Bridge processor from the end of 2011 to late Q1/early Q2 2012. Meanwhile, on the other side of the CPU pasture, there are rumors that AMD's Bulldozer might slip once again. Apparently AMD hasn't officially confirmed that it shipped its upcoming server-class Bulldozer products for revenue during August. This is possible, but seems somewhat unlikely. The CPU's anticipated launch date is close enough that the company should already know if it can launch the product."

There might be good reasons on both sides, but the tinfoil hatter in me believes this might have more to do with the fact that both companies want to see a little more profit out of the R&D that went into the current generation of products before obsoleting it. The performance of the current generation is high enough that it is getting harder to introduce a new generation at a price point that can both recover R&D costs and still offer reasonable value to the customer.

The current situation is that Intel is slaughtering AMD. AMD hasn't had an architecture update in a long, long time, and it is hurting them. Clock for clock, their current architecture is a bit behind the Core 2 series, which is now two full generations out of date. Their 6-core CPU does not keep up with Intel's 4-core i7-900 series, even on apps that can actually use all 6 cores (which are rare). Then you take the i5/i7-2000 series (Sandy Bridge), which is a good bit faster per clock than the old parts, and there is just no comparison.

On top of that, Intel is a node ahead in fabrication. All Sandy Bridge chips, and many older ones, are on 32nm. AMD is at 45nm at best currently. Not only does that mean more performance, it means less heat for the same performance, which matters particularly for laptops. Then of course Intel is talking about Ivy Bridge, which is 22nm, another node ahead. Their 22nm plant is working and they've demonstrated test silicon, so it will happen fairly soon.

The situation is not good for AMD. All they've got is the low end, and that is getting squeezed hard by Intel too. They need a more efficient CPU and they need it badly. Delaying is not something they want to do; Bulldozer has been fraught with delays as it is. They've been talking about it since 2009 and delivered nothing.

They have every reason to want to get Bulldozer out as soon as possible and preferably before Ivy Bridge. Each generation that Intel releases that they don't have a response for just puts Intel that much farther ahead.

Now that said, Intel may well have decided to hold Ivy Bridge if AMD can't deliver Bulldozer because they don't need to. Sandy Bridge CPUs are just amazing performers, they don't need anything better on the market right now. However I can't imagine AMD colluding with Intel on this. They are not in a good situation.

I hope AMD can get back on their feet soon and some day offer real competition to Intel again. That would be generally good for consumers, lowering prices and pushing both companies to keep innovating. But it doesn't look likely. :/

I've always liked AMD's products, and the whole underdog-fighting-for-its-life story is easy to sympathize with. It's just sad that they never seem to be able to properly compete with Intel; there's always something wrong or amiss.

I think the Pentium 4 days really f'd over AMD in the OEM space, though since Core/Core 2 Intel has held the crown. Still, for most uses AMD's E-350 is a really nice offering. I think Bulldozer will have advantages in the low-to-mid range, while Intel may keep the raw CPU speed crown. I'm more interested in seeing an Nvidia Tegra 3 in laptop/desktop options myself. I think we've gotten so used to the bigger, better cycle that we've lost sight that 5+ year old tech is more than fast enough for anything most pe

I have to say that so far I'm impressed with the E-350 in my new not-quite-netbook. For where it sits on price/performance/battery life, there weren't any Intel-based portables I could consider. I don't have the patience to put up with the Atom anymore, and their larger budget processors are an indecipherable mess of product lines and model numbers. Intel's problem that everyone has been sounding the alarm on is that in the coming years being the x86 people with court mandated competiti

I don't believe the bit about 32nm is accurate. I just ordered a new laptop with an AMD A6-3400 CPU. This is a Fusion-based chip and is 32nm.

As far as the performance claims regarding 6-core AMD chips, I have to agree. However, the cost of an Intel chip was not worth it to me. My 6-core AMD upgrade saved me hundreds of dollars, and it still doubled my StarCraft 2 framerate over my Phenom 9600 X4.

Intel stuff is faster if you have the money. It's not fanboyism, just practical price/performance based on benchmarks.

I bought a nice 6-core Phenom X6 1035T. It is underclocked to only 2.6 GHz, but for $450 I got 8 GB of RAM and hardware virtualization instructions to run VMware. With the 6 cores and 8 GB of RAM, it rocks to have 3-4 VMs running for the price I paid.

With the ATI 5750 that came with it, games run reasonably well too. As soon as I upgrade the PSU, I plan to flash the BIOS so I can clock my CPU to 3.2 GHz. Asus crippled it, but there are hacks to get around it.

That's a $20 difference and BTW the i5 blows away the Phenom (any Phenom). You don't even need an i7.

Intel is able to price their CPUs at a bit of a premium over AMD, which is why Intel is rolling in money and AMD is not. But there's a good reason why Intel has that pricing power, and it's one word: "SandyBridge".

It is also true that the absolute highest-end unlocked Intel CPU is priced at a very serious premium... but if you are try

You also have to consider a few other things: integrated graphics options, especially in laptops, and motherboard options. This is where the pricing really favors AMD. For my mom, father, brother, sister, grandparents, and others, an E-350 based system is sufficient, and AMD's other integrated GPU options all exceed Intel's. This is why I've gone about half and half on even recent builds, often in favor of AMD. Total system cost for a non-suck system (sub-$750) really favors AMD. I think where AMD needs to f

I've recently switched from all-Nvidia to AMD GPUs and was pleasantly surprised when the driver horror stories just didn't happen. Apparently the monthly release cycle did wonders for them, both on Windows and Linux.

Let me be clear: when I was talking about price, I was including the fact that I upgraded an existing system with a 6-core CPU rather than buying new RAM + motherboard + CPU to switch to an Intel chip.

There are workloads where AMD chips beat Intel chips. The benchmark mentioned by the other poster is an example. One benchmark doesn't prove anything, but I'm certainly happy with my purchase.

What I like most about the AMD CPU is that it's great for building packages. I use it occasionally on the MidnightBS

for something more useful than StarCraft. I agree you don't need 6 cores and 4 GB of RAM to read your e-mail, but today the workload of an average server or desktop includes running virtual machines, virus scanners, full disk encryption, Flash websites, and whatnot. The laptop I was "given" 2 months ago has a brand new 4-core Intel, 4 GB RAM, and an Nvidia Quadro GPU, and it's too slow for my normal workload of terminals, browser, and VMs. Given the fact that I'm a contractor, spending a little extra on a faster CPU would proba

For servers, almost everything runs on VMs now. More power per CPU is very welcome there, since you can run faster/more VMs per box. And the less heat you produce, the more servers you can put in a data center. Given the cost of real estate at prime interconnect sites, it's profitable to go green even if you're not a tree-hugging hippie.

Hm, actually, if you're going to have a pile of heavy-duty VMs running concurrently, a higher number of slightly less powerful cores are going to be much better than a

If you think you need more CPU for desktop than for gaming, you are doing something seriously wrong, no matter how many VMs you use. Seriously, check the hardware requirements for, say, Starcraft 2. It totally owned my machine before I last upgraded. The same machine that practically flies for development, VMs and computational fluid dynamics. Yes, it uses on-access virus scanning and W7.

4G ram

There's your problem. VMs are big. Swapping to hard disk is slow. More CPU won't help. You need more RAM.

It depends on the games you are looking at. Most first-person shooters are highly GPU focused, with very little AI involved. Much of this is due to releasing the same game for consoles as well as PCs: they go for the lowest common denominator, and that generally means a low-end CPU; even the graphics tend to avoid being cutting edge so the PC version stays almost identical to the console version.

If you're running numerous VMs, RAM is your problem. Given enough RAM, a Core 2 Duo will run several VMs in a test environment just fine. With 4GB RAM, you can throw the fastest CPU at it you like; if you start running into swap, you're fucked.

In fact, that goes for almost every non-gaming task you'd use a box for these days, other than transcoding video and a few other CPU bound niche tasks.

The only problem is that most of the benchmarks use the Intel compiler and are thus completely worthless. It would be like basing the GPU results on Quack.exe, remember that? Hell, see for yourself: run the benchmarks, change the CPU-ID to read "Genuine Intel" and run them again... gasp! Your CPU just jumped up in performance by over 30%... amazing!

People don't realize how BIG a chunk that compiler rigging takes. It's not just pre-crippled; it's tying a fucking boat anchor to the code. We are talking losing ALL of the SSE instruction sets, rolling back past 2002 to nothing but early-90s x87 instructions. How long has it been since x87? 1994?

So, as I point out every chance I get: run a fair benchmark and then we'll talk. Compile it with the GNU compiler, which we know is vendor-agnostic, THEN run the benches. Fair is fair, folks.
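To make the claim above concrete, the rigging being described amounts to a vendor-string check in front of the fast code path. Here's a minimal sketch in Python; the function and path names are illustrative only, since the real dispatch happens inside compiled binaries and the compiler's runtime library, not in any public API:

```python
# Hypothetical sketch of vendor-based code-path dispatch, as alleged
# in the comments above. All names here are illustrative, not real.
def pick_code_path(cpu_vendor_string):
    """Return which instruction set the benchmark binary would use."""
    if cpu_vendor_string == "GenuineIntel":
        return "SSE"   # modern vectorized path
    return "x87"       # legacy scalar fallback for every other vendor

print(pick_code_path("GenuineIntel"))   # SSE
print(pick_code_path("AuthenticAMD"))   # x87
```

The point of the CPU-ID experiment described above is exactly this: the same silicon gets a different code path purely based on the vendor string.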

People don't really care about clock-for-clock these days. The kicker is energy efficiency and cost per unit of performance. AMD does reasonably well on energy efficiency and really well in the bang-for-buck department. Yep, Intel currently outstrips them at the high end, but AMD has a lot of the mid-to-low range market and still a good showing in the server market. I wouldn't exactly call that 'getting slaughtered'. Still, as you say, they do need to get newer architectures out the door to keep bei

To an extent, yes, but the Atom sucks. I mean seriously, Intel ought to have been too embarrassed to let that dog see the light of day. And yes, the Intel offerings do offer better battery life, but at a cost; the only ones I looked at were several hundred dollars more. Battery life is great, but with that much extra on the price tag you might as well just buy a couple of extra batteries.

The Atom was a quick and dirty way for Intel to get into the tablet and netbook business as that market was taking off. Its only reason for existence is that it is an x86 processor and thus can run Windows. Expect to see sales drop off rapidly when Windows 8 comes out with ARM support.

The trouble with MSWindows 8 on ARM (if you even consider MS an option) is software from other companies. It won't run. It won't even install. (That's a prediction, not tested. But a reasonable bet.)

I don't expect MSWindows8 on ARM to have ANY effect on desktop sales, even at the extreme low end. It may be fine for phones, where there *aren't* any legacy problems. (Not the way I'd bet, but a possibility.) But I don't see any possibility for it on a general purpose computer. Going to run an X86 emula

Well, the article basically said that there is no reason to believe that Bulldozer is delayed at all. I dunno why the title reads "Intel and AMD may both delay"

... wait. Yes, I do. To get readers.

From the article:

The CPU's anticipated launch date is already close enough that the company should already know if it can launch the product or not; waiting until now to announce a delay isn't something Wall Street would take kindly. Moreover, AMD has been fairly transparent about its launch dates and delays ever since the badly botched launch of the original K10-based Phenom processor back in 2007. Llano has been shipping for revenue for several months, and we're not aware of any 32nm production troubles at GlobalFoundries.

On top of that, Intel is a node ahead in terms of fabrication. All Sandy Bridge chips, and many older ones, are on 32nm. AMD is 45nm at best currently.

No, the Llano chips are shipping and are 32nm SOI. However, only the low-power Bobcat cores are made on that process; they need the high-power Bulldozer cores to compete with Intel on performance. But yes, AMD still ships a great many 45nm chips.

Then why do I only buy AMD (and ARM) these days? Frankly, Intel just seems to fib about their power envelope every generation, and I do not, repeat, do not like to be surrounded by noisy computers. Currently running a quietized 4-way Phenom II box, very happy with it. I have not been happy with any Intel box as a workstation for quite some time. Nothing beats the Pentium M in my aging Shuttle for a basically silent server (21 dB @ 3 meters). Every other Intel box I have run recently requires stupid amounts o

Then why do I only buy AMD (and ARM) these days? Frankly, Intel just seems to fib about their power envelope every generation and I do not, repeat, do not like to be surrounded by noisy computers. Currently running a quietized 4 way Phenom II box, very happy with it.

You'd have been happier with an i3 or i5. I can just hear the fans on my i5 server when I stand with my ears a few inches away from it.

Not at the same price point. For what he'd spend on the i3/i5 he could probably have water cooling or something else that is ridiculous. That is the thing these kinds of comparisons always leave out: cost. I've been running AMD systems for a while now; I can upgrade a box every two years, where buying Intel would mean upgrading every four. While in years 1-2 the Intel system would be somewhat faster, in years 3-4 the AMD system would be miles ahead. Plus I'm sinking less money into wh

You'd have been happier with an i3 or i5. I can just hear the fans on my i5 server when I stand with my ears a few inches away from it.

Both Intel and AMD give the TDP of their four-core parts (i5 for Intel, Phenom II for AMD) as 95 watts. The difference is that I believe AMD. And I wonder about your belief that an i5 box would be quieter than mine with similar components, or your definition of what a few inches is, or whether you have the stereo on when you post to Slashdot. I have had people tell me with great assurance that an Xbox 360 is quieter than a PS3, or that they can't hear their PS3 when watching a movie, both patently absurd wi

It's the desktop and server CPU market, though. Anything else in those areas is just a tiny niche.

To be fair, ARM is likely to eat into the low end of that market over the next few years. My Atom-based server/DVR was fast enough until we got OTA HD here and it became too slow for transcoding, and the Ion Xbmc box is plenty fast enough for video playback or general desktop usage; my i5 laptop spends most of its time at 1.2GHz, where it's probably not much faster than an Atom.

So if ARM can produce a chip at least that fast (if they haven't already) I think there's a chunk of the x86 market waiting for the

There are Nvidia Tegra 2s being sold clocked at 1.2 GHz (dual core) right now. The Tegra 3 line will be quad core 1.5 GHz with 1.5 GB RAM. With Win8 supporting ARM, I can easily see ARM netbooks/laptops becoming commonplace within the next few years.

High-end RISC/mainframe platforms make up ~35-40% of the server market (source: both IDC and Gartner's numbers for Q1 and Q2) by revenue. High-end UNIX is staying flat and mainframe use is hugely increasing. You have no fucking idea what you're talking about.

No offense, but I'm typing this on a $350 15" Acer laptop with an E-350 (Zacate Fusion processor). I played Portal 2, start to finish, on this thing at medium settings. It gets about 6 hours of battery life with light web browsing. Intel may be killing AMD on the low end, but based on my comparison to a $600 HP ProBook with an i3-2ksomething, it's indistinguishable for web browsing and word processing, and the i3 just fails any time you try to run a game.

You have a point, but you are comparing a business notebook at the inflated suggested retail price (big companies get big discounts) to a consumer-grade machine. Try opening the laptop 2000 times and typing 1000 hours on it, and see which one is still more or less functional. A better comparison would be an HP Pavilion. You can get something like a DM4 for $450 on the HP website right now; it compares to your Acer in specs and also has the AMD chipset. The cheapest i3 Pavilion is another $100 more expensive. Th

What are you talking about? Current Opterons are a 2-chip MCM module (see the AMD fanboys squirm over that after they denigrated Intel for doing it first). Each chip in the module has a 2-channel memory controller that isn't any different from the ones on desktop Phenoms. Meanwhile, Intel has been selling single-chip solutions with 4 memory channels since 2009, with lower-end models having 3 channels (Intel also had a "real" 8-core CPU out before Bulldozer, despite AMD's marketing claims that they inve

The high-end Xeon chips had THREE memory channels according to Intel right up until the last batch released in Q2 2011. Did they lie, or did you mean for over 3 weeks (rather than years)? Note that Westmere runs in the mid-2 GHz speed range at 130 watts.

Personally, I don't care if there are one or two dies in the package for the 8- and 12-core chips (though if the workload is memory bound, the 12-core shouldn't be used). It doesn't seem to matter much, since there will be 2 or 4 chips on the board anyway. F

"The current situation is Intel is slaughtering AMD." Frankly, in the consumer space ARM is slaughtering Intel. The truth is that today 90% of all desktops have more than enough CPU power. The most intensive thing most computers do today is play back HD video. Sure, the i7 Sandy Bridge is blindingly fast, but most people don't need the speed or the price tag. The new A8s and i3s show where the future is going: fast enough, with good-enough graphics, at a low price.

Basically you are right. AMD has nothing even remotely close to SandyBridge and Bulldozer won't get them there either. I've been a long-time AMD fan, and over the years AMD has saved me bundles of money with their socket compatibility.

But AMD has to make a socket switch now, and there are way too few AM3+ mobos available. Not only that, but the mobos that are available are wired for compatibility: they will work with AM3+ CPUs, but they won't be able to use all the new performance capabilities. So

The general tradeoff between Intel and AMD is that AMD optimizes its instructions to run fast from cache and poorly when cache misses occur, while Intel optimizes its instructions to run as fast as possible when cache misses occur and to run modestly otherwise. That's the best way I can describe it.

For a long time AMD was able to compensate by placing larger caches on the CPU die, while Intel didn't care and generally had smaller caches. That changed with the Core 2 Duo and later chips (particularly Sandy Bridge).

Now that said, Intel may well have decided to hold Ivy Bridge if AMD can't deliver Bulldozer because they don't need to. Sandy Bridge CPUs are just amazing performers, they don't need anything better on the market right now. However I can't imagine AMD colluding with Intel on this. They are not in a good situation.

Perhaps it's Intel wanting to keep AMD alive for antitrust reasons. They could very well continue to slaughter AMD, but is it in Intel's best interests? If AMD dies, then Intel's going to ge

45 nm is correct for the AMD Phenom II series, but the "Llano" APUs for low-end desktops are already at 32 nm. So you could say Intel is half a step ahead right now. Overall, however, I agree that AMD is under pressure and cannot afford artificial delays in their products.

What they still have are some niches where Intel has slacked off or doesn't compete for other reasons. The most important one right now is the APUs. AMD's Brazos platform does well in netbooks, and IMHO Llano is a good choice for che

HP, Asus, MSi, and Lenovo have all adopted the E-350 over the Atom alternatives in notebooks and low end laptops.

Remember that notebooks and laptops are replacing desktops in the typical home. Intel is probably pretty worried that they have absolutely no competitor to the E-350 that doesn't both cost significantly more and draw significantly more power.

Llano, on the other hand, has much better graphics than an Intel Atom, hell, even a full-speed i5! Bulldozer will have even better graphics. If you just run IE 9/10, Flash, and Office, the AMD Llano and Bulldozer will seem faster and less choppy. They can also run games like World of Warcraft. Atom subnotebooks suck and can't do these things or run these games.

Llano has much better graphics than Intel's offerings, too good in fact. They stuffed in far more performance than has ever been seen in an integrated GPU and bottlenecked it severely on the CPU's memory bus. It can't run at full speed. It can't even run at half speed without hitting bandwidth limits. There is good reason why its discrete brethren reach into the triple digits. All that extra hardware is just sucking down power and real estate on silicon that can't be properly used.

It can share fine if it uses the same dedicated memory controller without issuing an interrupt to the CPU or chipset each time it needs to access RAM. That is what crippled the other integrated chipsets.

No it's not. Older integrated GPUs were crippled by low core performance, not memory bandwidth (though, to be fair, they couldn't have high core performance because the memory bandwidth was so low).

Modern CPUs want a lot of memory bandwidth. Modern GPUs want a staggering amount of memory bandwidth (the GTX 580, for example, has around 200 gigabytes per second, and even the 8600 GTS has 32 gigabytes per second, whereas a dual-channel DDR3-1600 system has only 25 gigabytes per second). Stick both of those on one c
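The dual-channel DDR3-1600 figure above is easy to sanity-check: each channel moves 1600 million transfers per second over a 64-bit bus. A quick check using those standard DDR3 parameters:

```python
def dram_bandwidth_gb_s(channels, bus_width_bits, megatransfers):
    """Peak theoretical DRAM bandwidth in GB/s:
    channels x bytes-per-transfer x transfers-per-second."""
    return channels * (bus_width_bits // 8) * megatransfers * 1e6 / 1e9

# Dual-channel DDR3-1600: 2 channels x 64-bit bus x 1600 MT/s
print(dram_bandwidth_gb_s(2, 64, 1600))  # 25.6
```

That 25.6 GB/s peak is the "~25 gigabytes per second" quoted above, and it is a theoretical ceiling; the GPU and CPU would be contending for it on a shared bus.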

Ivy Bridge can't compete with ARM... but similarly, ARM can't compete with Ivy Bridge either. IB will have its biggest advantage in thinner, lighter notebooks (the MacBook Air and the new Ultrabooks being put out by lots of different vendors). The projected power envelope is about 17 watts, which is much, much higher than even tablet-level ARM chips. At the same time, its performance will destroy anything ARM will have in the next 5 years (the newest ARM chips coming out next year are just b

I think this was true ~8 months ago but Intel mobos are priced about the same as AMD mobos these days... really ever since SandyBridge came out. There is so much chip integration now that the only real differentiation between mobos is added features and BIOS software.

Most of the costs involved in building a gaming system are unrelated to the cpu. I don't count built-in graphics as being decent, though you might, and I've fried enough systems with cheap PSUs that I don't buy cheap PSUs any more. So my con

Agreed. The APU is great for the "regular" system, but they have nothing for server market anymore.

It depends what you mean by server. The twelve core AMD chips beat anything Intel can sell you if you have tasks that are going to use as many cores as they can get. Stick four of them in a SuperMicro board with a bit of memory and you have something that will outperform a far more expensive four socket Intel Xeon based machine.

Possibly, but realistically most people would do fine with AMD's Fusion processors, the ones they've already released. There are legitimate reasons to have more power, but for the things people typically do, it's more than enough.

The most naive question to ask is whether this sort of delay is relevant to Moore's law and similar patterns. There are a variety of different forms of Moore's law. We've seen an apparent slowdown in the increase in clock speed http://www.tomshardware.com/reviews/mother-cpu-charts-2005,1175.html [tomshardware.com]. The original version of Moore's Law was about the number of transistors on a single integrated circuit, and that's slowed down also. A lot of these metrics have slowed down.

But this isn't an example of that phenomenon. This appears to be due more to the usual economic hiccups and the lack of desire to release new chips during an economic downturn (although TFA does note that this is a change from Intel's normal approach to recessions). This is not by itself a useful data point, so there is no further need to panic.

On a related note, there's been a lot of improvement in the last few years simply from making algorithms more efficient. As was discussed on Slashdot last December http://science.slashdot.org/story/10/12/24/2327246/Progress-In-Algorithms-Beats-Moores-Law [slashdot.org], by a variety of benchmarks linear programming has become 40 million times more efficient in the last fifteen years, and only a factor of 1000 or so is due to better machines, with a factor of about 40,000 attributable to better algorithms. So even if Moore's law is toast, the rate of effective progress is still very high. Overall, I'm not worried.
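The arithmetic behind that "40 million times" figure checks out, because the hardware and algorithm improvements are independent and therefore multiply:

```python
# The two independent speedup factors cited in the linked story
hardware_speedup = 1_000       # faster machines over ~15 years
algorithm_speedup = 40_000     # better linear-programming algorithms

# Independent improvements compose multiplicatively
total_speedup = hardware_speedup * algorithm_speedup
print(f"{total_speedup:,}")    # 40,000,000
```

In other words, on these benchmarks the algorithmic gains alone dwarf the hardware gains by a factor of about 40.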

most naive question to ask is whether this sort of delay is relevant to Moore's law

Does it have to be? Moore doesn't work at Intel anymore, so they might have a new plan. It was always going to hit a physical limit at some point, which Moore would have known perfectly well when he proposed it in the first place.

I will continue to buy AMD. I've compared my sub-$500 AMD rigs with comparable Intel rigs, and I don't see the point of spending 2 to 4 times the money on Intel when AMD does a fine job.
My Phenom II 945 has served me well: runs cool, runs fast, and everything I put on it, it takes like a champ. I have yet to stress out the Phenom.
I've run multiple games on it, plus audio and video work. The only 'advanced' things I haven't done on it are CAD and SETI@home.
Why spend 2 or 3 times as much for the Intel, when all i'

I don't buy Intel necessarily for the CPU; I buy Intel for the supporting chipsets. Intel chipsets are in most instances rock solid, with excellent driver support for both Windows and Linux. That being said, I'm glad AMD exists... if AMD didn't exist, it would be necessary for Intel to create one. ;)

I have to wonder how much of this is due to the stagnating economy in much of the developed nations. My recollection is that the last time the economy went south, all sorts of projects were either postponed, put on hold, or simply ended.

Processor technology is at a point where more is gained by shrinking the die than by changing the design. Because of SMP, sophisticated design changes are not needed to gain performance. Intel knows they can seriously pull ahead of AMD if the next processor is successfully shrunk to 22nm, so they might as well wait. AMD isn't able to drop cash on a die shrink for every other release. If they can put off releasing their next processor until they can afford smaller-process production, they will get a lot more out of i

" would have made it way ahead of the curve predicted by Gordon Moore"

They are ahead of the curve. Intel is postponing because of their recent trigate tech. Intel could still release 22nm right now if they wanted, but it wouldn't be worth it with such a huge difference in power leakage.

Now, if you want to look at processing power, look at GPUs. They're already beyond doubling every 18 months, and are expected to approach 2.5x-3x per 18 months in the next 2-3 years.

I was surprised that the original planned release date, back then, would have put it way ahead of the curve predicted by Gordon Moore. I was saying it should be delayed some time so it actually tracks the prediction more accurately, and it has been delayed; however, I will never know whether it was done for that reason.

So... if I'm reading you correctly... the reason they should have delayed the hardware wasn't something like it being too expensive to produce for the expected market, too difficult to produce in sufficient yields, or any other technical or business reason, but that the number of transistors involved didn't match a prediction made some 30-odd years ago?

You realize that prediction has only "come true" when you average the graph over a very long period, and there are significant statistical outliers (that represent significantly successful chips in their day) along that plot?

A quick Google would answer your own questions. The K5 was released as a competitor to the Intel Pentium and IBM/Cyrix 586; the K6 was up against the PII and suffered from major thermal problems. While it did give Intel a little bit of a worry as far as sales go, the K6/K6-2 weren't exactly powerful. It wasn't until the Athlon (K7) that Intel shat themselves.

Yeah, but there were a couple of dirty little secrets with the early Athlons. Number one: thanks to NO thermal monitoring, they burned up, they burned up a hell of a lot, and they took boards with them. We used to keep a dead-chip bucket at my old shop, and that thing was damned near full of nothing but early Athlons. Number two: it was trivially easy to rip people off by OCing an Athlon and selling it as a faster chip. Those early Athlons really didn't give you enough info, either in the BIOS or in the OS, to really

Oh, but you could OC the living shit out of the Celery back then! With a good air cooler and a little luck, a Celery could be OCed by a good 30% to 35%. Man, in those days there really was a reason to replace your gear every 3 years or even less; the innovations on both sides of the aisle were like two heavyweights duking it out... then Intel bribed all the OEMs and won through deceit what they couldn't win fairly, boo hiss.

Meh, the big story as far as AMD goes isn't gonna be BD; frankly, IMNSHO as a PC bu

Yeah, my K6-2, stock 300 MHz, would only go to a bit under 400 MHz OC'd, and no amount of cooling helped, while the 300 MHz Celerons around that time did 450 MHz fairly regularly. After that I went with a K7, and then a Duron.

I've been rocking Intels for a few years lately, though, because I've been content with work-provided workstation laptops. They're beasts compared to what used to be usual; they can even play new games well.

Not to mention speeds really haven't gone up like they used to (clock speeds especially, but yeah, the MHz myth).

Depends how you measure. I've recently gone from a Q6600 Core2 Quad, to an i7 2720 in my new macbook pro. Yes, desktop, to mobile CPU.

The i7 kicks the living shit out of my Core 2 at transcoding video. Sure, it's a bit of a niche task, but CPUs have in general been "fast enough" for day-to-day, non-niche desktop stuff since the original Pentium, really.

The K6 and its predecessors had a pretty crappy FPU, and in the days of the Quake 1 and 2 engines, before 3D acceleration was in common use, that made for a pretty shitty gameplay experience. No games? They were fine. But if you were in any way into 3D, they were crap.

That any AM3+ motherboard will be able to support the next generation of CPUs is alone a good enough win in my book. As soon as the box is delivered, I have to have my computer upgraded in under 10 minutes.

FTFY

No, but seriously, we went from AM2 to AM2+ to AM3 to AM3+, so compatibility is not that great. I just got my AM3 box last month (because going Intel would cost me about $100 more and perform a tad slower in Premiere), and I don't know if I'll be able to upgrade to Bulldozer, since not all AM3 boards will work with AM3+ CPUs. An AM2+ Phenom II X4 920 owner will certainly have to buy a new mobo. Sure, it's a bit better than what Intel does, but you can only go one generation further and, frankly, I don't t

We're simply looking at an increasing gap in the demand for CPU power.

For most people (documents, spreadsheets, email, browsing), an i5 is more than powerful enough to satisfy the current demand. OTOH, there are specialized areas (visual effects, simulation, ...) where even the current top-of-the-line Sandy Bridge Xeons hit their limits way too easily.