11.12.2009

After decades of market wrangling and years of legal complaints, AMD and Intel have settled their long-standing dispute over Intel's alleged anti-competitive practices against AMD. Under the settlement, all patent disputes are resolved and AMD will withdraw all legal complaints worldwide. In addition, the cross-license agreement has been extended for another 5 years. In the aftermath of this historic settlement between the two largest chip-makers, there will likely be many who cry foul or vindication, and the flames on forums will reach far and wide.

We don't know the exact reasons why Intel settled now, or what the long-term ramifications will be for either company, but suffice it to say, this long and storied chapter is now closed. AMD may have come away with a victory, but what about consumers and OEMs; did they gain anything from this? How about the semiconductor industry?

The legal battles and anti-trust complaints stem from the notion that Intel is a "monopoly" and has used its position to push AMD out of the market, hurting AMD and consumers in the process. The specific complaints are against the use of discounts or rebates that Intel would pay to OEMs if they agreed to use Intel chips exclusively or at some percentage of volume. The logic goes that Intel was able to use a dominant position and essentially "pay" companies not to use AMD chips. This is claimed to hurt not only AMD directly, but the entire market. As a result, OEMs and consumers now have "less choice," and this will ultimately result in less competition.

Not only is this analysis short-sighted, it misses the bigger picture of the market as a whole and will ultimately end up hurting the semiconductor market and consumers. Given that AMD actually sold everything it made during the time Intel was allegedly committing anti-competitive behavior, there is very little evidence that consumers were ever unable to buy AMD chips. The fact that Intel was able to offer rebates and discounts to OEMs and remain incredibly profitable only speaks to the efficiency of Intel's capital structure, which resulted in lower prices for OEMs and consumers. The fact that AMD was unable to offer discounts and underbid Intel for business shows that the problem is not Intel being anti-competitive, but AMD being uncompetitive in the market. AMD was unable to undercut Intel's prices because it had a less efficient capital structure and could not profitably sell chips at those levels. In a free market, the incentive is for AMD to improve efficiency and eliminate waste so that it can compete.

Next, is Intel really a monopoly? It is important that we distinguish between a legal monopoly and an economic monopoly. A legal monopoly is one granted by the state, which allows a company to be the single supplier of a good at the expense of all others (i.e. utility companies, etc.). An economic monopoly occurs when a company emerges as a single-source supplier by being the most efficient (poorly run companies went bankrupt) or because consumer preference chose a single entity in the market. Is Intel a legal monopoly? Yes and no. The patent system is a de facto monopoly system where a company is granted a legal monopoly over an idea. Intel technically holds a legal monopoly on the x86 IP and other things like chipset buses and other miscellaneous items. When Nvidia complains about Intel refusing to grant them a bus license for QPI, the argument for anti-competitive behavior has some merit. However, in the case of Intel vs AMD, the government has brokered a cross-licensing agreement for all IP, and they are essentially in a quasi-free-market situation. So in this respect, Intel is NOT a legal monopoly. However, Intel is arguably an economic monopoly, which is why the government steps in.

Ironically, the government has very little problem with anti-competitive behavior when Intel prevents Nvidia from creating Nehalem chipsets, because it was complicit in the arrangement of the monopoly and license. However, it appears the government does have a problem with Intel becoming an economic monopoly, because that was chosen by the market; ergo, the government didn't have any control in the situation and therefore must punish everyone involved.

Finally, the least recognized aspect of harm is the fact that billions of dollars in capital have essentially been flushed down the toilet. How, you may ask? Through a combination of fines from the EU and this settlement, over $2.5B in capital has been moved from wealth-generating activities to wealth-destroying activities. In a free market that is removed from coercion and government intervention, a profit occurs when you create a product that is worth more than the sum of all resources put into creating it. When this occurs, your reward for using resources effectively is a profit. On the other hand, when you are wasteful and inefficient and create a product that is worth less than the sum of all resources put into it, your punishment is a loss. In this respect, we have taken over $2.5B from one of the largest wealth-generating companies in the world and blown it on boondoggles and social programs in the EU, and given a handout to AMD, which is nothing more than corporate welfare for a company that has a record of destroying wealth to the tune of ($7.2B) in retained earnings over the course of its lifetime.
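As a rough sanity check on the "over $2.5B" figure, here is a minimal sketch of the arithmetic in Python; the EUR/USD conversion rate is my own assumption for 2009, not a figure from this post:

```python
# Rough tally, in USD, of the capital referred to above.
eu_fine_eur = 1.06e9        # EU antitrust fine against Intel
usd_per_eur = 1.40          # assumed 2009-era exchange rate
settlement_usd = 1.25e9     # Intel's cash payment to AMD

total_usd = eu_fine_eur * usd_per_eur + settlement_usd
print(f"Approximate total: ${total_usd / 1e9:.2f}B")  # roughly $2.7B
```

Under these assumptions, the combined outflow is about $2.7B, consistent with the "over $2.5B" claim.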

To sum this post up: has the market benefited from the anti-trust activities against Intel? Most likely not, although AMD could surprise us with a remarkable turnaround; unfortunately, that is still speculation. On the other hand, what we do know is that Intel has suffered much harm in all of this at the hands of the legal system. So as of now, we can only say that a net harm has occurred in the market, and it's unknown how that will affect consumers in the future.

by
Khorgano

138 comments:

I wonder how this affects the NY AG's lawsuit? I don't know if they were planning to call any AMD execs or personnel to testify, or any other industry execs. Would there be any incentive to cooperate or to try and stick it to Intel when they've settled?

I can see where Intel considered it to be in its best interests to settle, even for such a big price tag. This might pull the rug out from under the Cuomo lawsuit and also keep the feds from joining the parade. The money will help AMD in the short term, but they're still in bad shape if they can't reverse their losses.

One question- what do you guys think the long term effects for AMD and GF? As I understand it, GF will be able to separate into its own entity if it wants to or has to. Will this help AMD, or does it make any difference based on their future prospects?

I wonder how this affects the NY AG's lawsuit? I don't know if they were planning to call any AMD execs or personnel to testify, or any other industry execs. Would there be any incentive to cooperate or to try and stick it to Intel when they've settled?

I would assume that Cuomo will pursue a conviction in NY. To him, Intel settling with AMD will imply tacit guilt or wrongdoing, so he really has nothing to lose other than stroking his ego and furthering his political career. The feds will probably end it, though, since AMD is withdrawing the US federal-level anti-trust suit.

Anandtech has an interesting take on the NY AG lawsuit - it'll be difficult for Cuomo to succeed without AMD's cooperation, which obviously would no longer be forthcoming under this settlement. Sorta like a trial where the plaintiff fails to show up :).

Hmmm... cross license reneg and lawsuit settlement combined and settled... who could have predicted this might happen...

....about a year ago!

;)

It'll be interesting - obviously details will be sealed but expect a bunch of class action lawsuits for money grabs by ambulance chasers - not sure if this stays sealed; though I think wrapping the patent stuff/license reneg into it is probably enough to keep it sealed.

This is a win for both companies - AMD really needed the cash and couldn't roll the dice on losing the x86 license - they were skirting on the edge with the GF acquisition of SMIC and capital calls coming up for the NY fab buildout (which would dilute AMD's equity stake probably below 30%). They probably were going to lose the "it's still a subsidiary that we control (wink, wink)" card.

Intel gets what they need - predictability and risk aversion. Intel made enough in the first 9 months to pay off both this and the EU fine, with a Bil left over. Basically, this costs them 6-9 months of profits, and it will likely leak out that AMD is now able to outsource manufacturing as part of the x86 deal, so look for AMD to dump GF sales faster than you can say "Spansion". And my thanks to the US regulators for completing one of the largest technology transfers to the Middle East we will see in our lifetimes with barely a casual look at it.

My guess is Cuomo continues - he's running for his next job and this is a large pelt he'd like to put on his wall. Factor in that some of the first few cases involving Wall St folks have already been found not guilty - and it's going to be hard for him to claim victories in that area.

MAY 7, 2008: Keep in mind AMD can only outsource 20% of their CPU production (by the terms of the x86 license) - so the only way AMD truly could have increased capacity further was through fabs - this is really the hole in AMD's case - if the market was more 'open' could they really have sold more chips?

I still see this case settling as the most likely outcome - AMD can't afford to wait until the case is judged and then the eventual appeal should they win. Their highest leverage is now (well actually it was probably about a year ago). 07 May 2008 20:09

I'm going to chime in here. I work for intel. And I came to the same conclusion as some of you did... where there is smoke there is fire, and the $1.25bil payout to AMD puts out a big billow of nasty black smoke.

Until I listened to Andy Bryant, who said just about the same thing. But then he said this -- at least three external law firms reviewed the evidence in the various cases and came to the conclusion that, based on the facts, intel is in the clear.

However, based on the perception AND the fact that this would be decided by a jury, the risk was too high to go to trial, and all three law firms recommended a settlement.

Still, morale for this particular intel engineer is a bit low. I do not like the feeling that I am working for a company that operates in shades of grey. I know all the engineers I deal with, and I know that their ethical standards are high. I know nothing of the marketing guys, but would love to believe the same.

As all of you said, this gives stability going forward. If that's worth 1.25 billion, then so be it.

Nonsense. Your design and process innovations are evident the world over. Currently, every other fab/manufacturer is falling on their collective asses, whereas you guys blew through 45nm like crap through a goose. You are ramping 32nm and you make it look like a walk in the park: lower power, smaller, faster, with each successive generation.

Perhaps they all could sue INTC to show them how to implement Hi-K, FinFET, or Tri-Gate?

"The Imitator"

I have no problem giving up A YEARS dividends to pay off those BLOODSUCKING LEECHES. I'm glad it's over, and now that it's done, just crush them. Send them down to the VIA gulag.

Just keep that nice fat 6-Core 32nm XE warmed up for OLE Sparks. I want the fastest processor on the planet. I know where to get it, and who's gonna make it.

I'm surprised. AMD must not have been so sure, to settle after all that rhetoric from Hector the wrecker, and for only 1.25 billion. What a loss for AMD.

I'm not sure what they got besides a bit of chump change. Yeah, it helps their debt, but really, the cost of developing a new technology is in the billions and a new fab in the multiple billions, so $1.25B now does nothing for them.

They are years behind on process performance, years behind on yield learning, and maybe a bit more than a year behind on lead node introduction. That little bit of money will do nothing to help them close the gap that has only been growing since the 90nm days.

On the surface, the settlement may appear to be an admission of guilt to the uninformed, but if AMD thought it had such a sure case, they really should have gone to trial. A civil case in front of a jury of people with an IQ of less than 100 would have resulted in a settlement far larger.

Only conclusion: AMD's case was weaker than we thought, and likely their financial position and x86 manufacturing licensing issue were far weaker than any of us on the outside can see.

The obvious conclusion for me is that AMD must be hurting much more than even I realized to have settled for such a small sum, or had no case at all LOL.

To look at this from a slightly different perspective, consider that any award in a court would have been for triple damages.

That puts AMD's value of the case at about $400M. If they had more confidence they would win a bigger settlement, they would have taken it to court. But just like Intel, they weren't sure enough how a jury would rule to roll the dice.
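A minimal sketch of the treble-damages arithmetic behind that $400M figure:

```python
# US antitrust statutes provide for treble damages, so a negotiated
# settlement can be read as roughly three times the single damages
# the plaintiff expected to prove at trial.
settlement = 1.25e9                      # Intel's payment to AMD
implied_single_damages = settlement / 3  # roughly $417M
print(f"Implied single-damages value: ${implied_single_damages / 1e9:.2f}B")
```

Dividing the $1.25B payment by three gives about $417M, i.e. the "about $400M" estimate above.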

Guru, while you did call it, the odds were stacked in your favor. It is the rare corporate case, indeed, that actually sees its day in court. :)

I'm going to chime in here. I work for intel. And I came to the same conclusion as some of you did... where there is smoke there is fire, and the $1.25bil payout to AMD puts out a big billow of nasty black smoke.

I hope you can find comfort in the fact that Intel never actually did anything wrong. They may have done something "illegal", but that is only because government intervention and the laws are so screwed up. This is the system we have, and that's the game Intel has to play. If shelling out over $2B is what it takes to keep the blood suckers off you, then I guess Intel did all right.

I know there aren't many people who share my convictions on true free-market principles (even those who claim to be capitalists), but I hope to at least open the dialogue and get people to question the status quo. In this instance, it has been a witch hunt against Intel, and as Roborat said, if they only have to sacrifice ~1 quarter's profits to wipe the slate clean, so be it.

Also, Moose, you're right: if AMD can no longer participate in the legal proceedings in NY state, it's not likely that anything will come of it unless Cuomo is hell-bent on pursuing a dog and pony show at taxpayer expense.

Gov. Paterson forecasts NY State will be bankrupt by Christmas... I mean, what the hell? Basically, the state is so screwed up they only saw this coming 4-5 weeks in advance? Could you see the meeting: "Oh, by the way, governor, we're kind of running out of cash and may be tapped out in a month. So we're proposing to cut education and hospital spending (but keep funding Cuomo and the AG office!)"

The point of this off topic ramble? Pretty sure the money grab, er...anti-trust case, against Intel will continue - where else is NY going to get money? With the hammer taken to the big financial firms, NY is going to see a massive hit in corporate tax revenue, not to mention the income tax loss from all of the lost jobs in that area.

I wonder if a 1.2Bil transfer of NY state's money to Dubai over the next few years will help or hurt the whole money situation in NY? (shakes head) 1400 jobs for 1.2Bil... I'm stunned that the state which has exhibited such sound financial judgment is on the verge of bankruptcy!

I think AMD's incentives are pretty obvious- they are running out of money and time. They weren't getting a dime from the EU verdict, they would not have been getting one from any of the other verdicts, and their own lawsuit might drag on for years. Intel was making noise about the cross-patent licenses, and the agreement that they had limited AMD's options in regards to GF.

Even if they thought that they had a good shot at winning a case and winning substantial damages (one does not always lead to the other, after all), they may have had to fold up shop before they got there, and then it would've been too late.

This settlement buys them some time and gives them some money, and right now that was their best option. It doesn't save them, IMO, but it can prolong their life a bit longer. It also removes a primary excuse that gets used to justify Intel's big profits and AMD's losing quarters. No more safety net for AMD.

This settlement buys them some time and gives them some money, and right now that was their best option. It doesn't save them, IMO, but it can prolong their life a bit longer. It also removes a primary excuse that gets used to justify Intel's big profits and AMD's losing quarters. No more safety net for AMD.

This is very true, and with the current cash on hand now, if they can get to break even and manage a few profitable quarters, they will likely be able to meet the terms of the senior notes that will mature in a couple of years, it's still going to be tight and they'll have to manage cash flow well, but it's looking a lot better now. The most important metric for AMD right now is cash flow, and with all the deals they've been doing, that is the only thing that has improved in the last year.

Tangential: AMD has made more money in a single day from suing Intel than they have in nearly 40 years of selling chips; maybe they should expand into an ambulance-chasing law firm and really bring in the dough.

The pro-AMD herd over at THG is mooing their contentment and proclaiming Intel dead or BK real soon now :). Especially since AMD has put Bobcat back on their roadmap at the analyst conference, which I think was the day before the settlement announcement, plus Bulldozer in 2011. Seems clear AMD now has the $$ for R&D once again.

BTW, Sci over on UAEZone commented that Robo's blog seems dead - apparently he did not know to click on the comments link to find all 950 comments or so since Robo's last posting :). After he got educated on link-clicking, he made a few disparaging comments about Khorgano, Robo, etc as well as Jumping Jack. He also mentioned being able to turn off his blog moderation, in case anybody wants to leave a few choice remarks about his acumen (or lack thereof) on his own website :).

BTW, Sci over on UAEZone commented that Robo's blog seems dead - apparently he did not know to click on the comments link to find all 950 comments or so since Robo's last posting :). After he got educated on link-clicking, he made a few disparaging comments about Khorgano, Robo, etc as well as Jumping Jack. He also mentioned being able to turn off his blog moderation, in case anybody wants to leave a few choice remarks about his acumen (or lack thereof) on his own website :).

This is fantastic news!!! When presented with facts, data and logic Sci usually responds with disparaging remarks and personal insults. This is his coy way of conceding the point and admitting defeat. We've all seen this before, it's really the highest form of flattery coming from him. I knew he really loved us!

I am very disturbed by the outcome of this, and the EU fine. This basically sets a sad example for profitable, market-leading companies like Intel. It just seems that everyone wants a piece of this company; next is the state of NY. When Intel is further dragged down into more of this legal mess and starts to suffer losses, everyone will lose.

Boy, did INTEL win another one with this settlement. Yeah, $1.25B is a lot of money, but they make that much profit in a little more than a month.

This chunk of change may provide debt relief for AMD and allow them to fully divest of GF, but does that change anything? Yes, they can go on designing chips now without the debt of the fabs, but they have to get them made somewhere.

GF still lags INTEL by a lot, both in dimension scaling, manufacturing scale and cost, and most importantly, performance.

AMD may design with the best of them, but it is always going to be behind on performance, so they will have to sell for less.

GF, on the other hand, can't charge too much or they will cause AMD to lose money. How can they find enough business to fund leading-edge development? The Arabs are getting in at the wrong time. Sure, Moore's law will continue, but the huge-return days are over. Throwing a billion at R&D and tens of billions at a factory that high-end CPUs need is a very poor choice for pissing money away. A fab in the Middle East provides 5,000 jobs at best; for a $20 billion investment you get a lot more pursuing other business.

AMD goes the way of Via.

Let the ARM versus INTEL battle begin.

The AMD story is over. Any Via fanbois want to help the AMD fanbois with their grieving?

INTEL haters, would you really want INTEL shackled? If you don't allow them to innovate and make money, they will slow down investment, slow down development of chips, and just hike their dividend. Sure, AMD will catch up. What happened the last time AMD was equal; how did prices go? They went up.

Today we have more value in the CPU than ever before, and why? Because INTEL still gets to innovate. Let the EU and US government shackle INTEL, and in 20 years you'll wonder why CPU innovation didn't continue on the trend it did in the last 10 years. You think your car will drive itself in 10 years if the US and EU shackle Intel? If they don't, what do you think INTEL can bring to the market in 10 years, and what will you find in your car / house?

A shackled INTEL is very very bad for the consumer and the AMD fanboi too.

If there are any other "special guests" who may have strolled by this site to take a peek, who feel that the company's armor has been tarnished by the AMD settlement, be advised.

There are a number of sites reporting that TSMC's 40nm yields are at 50% for a "mature process," thereby explaining both AMD's and Nvidia's GPU delays.

I'm no process engineer, not by a friggen longshot. However, if half of every 6000A electrical service I installed fell on its ass, I'd be selling pencils out of a cup in Penn Station every morning. We're talking 40nm here, not 32! It seems to me that 32nm will be a nightmare for THEM!

I'm no industry insider here, but I don't need Braille. I'm willing to wager that every foundry around the world is scared shitless by the way INTC has been executing flawlessly. Make no mistake, all heads are turned your way and that includes every shyster/bloodsucker from here to Taiwan.

Even those Ivy League pompous ass pimps at IBM have been awful quiet as of late. All the tech consortiums around the planet can't gather their balls together on one table to do what you boys have done so well-----alone.

Greatness breeds envy. America is a great nation. We make a big target. We lost a couple of billion on 9-11-01. I lost some friends; many lost family. We had those who doubted America's role, and methods, regarding world affairs. Morale was quite low, especially here in NYC.

I say to hell with them. We do what we do because we must. We cannot by any measure capitulate, lay down, and die. We will go on and continue to be great, regardless of cost. It's who we are and what we do.

We have a new big gun at INTC, Douglas Melamed. He reports directly to Big Paulie. He will help us during this new transitional phase of the company's operation as we watch AMD slowly fall into obscurity over the next few years.

Orthogonal, not so fast. Although it pains me not to be in agreement with you, all evidence is to the contrary. In fact, I too believed our beloved INTC would take a serious hit (your raise and my dividends). Not so.

Today INTC has !RAISED! its dividend to shareholders by about 2 cents a share, from 14 to nearly 16. Today the company rallied 43 cents a share, directly in the face of this bullshit "settlement." I've got a theory, and I think it's a pretty good one.

This whole mess is finally behind INTC. Both the damned EU and "the Imitator" have gotten their blood money. We know it, but more importantly Wall Street knows it. This cloud has been casting an ugly shadow over the company since 2005. It's over.

Now that we are in the clear, aside from the little cockroaches who will try to seek damages, the company is extremely attractive financially. Couple this with the way you boys are executing, plus a strong product lineup, and I can't see better future potential in any other semi. I think long-term investors see the same, and Wall Street has spoken. Look for 2 bucks more by the end of the year, buddy---------

BTW, Khorgano - from your front-page article: The patent system is a de facto monopoly system where a company is granted a legal monopoly over an idea.

I'd have to say that the US patent system is a limited monopoly, since patents are only in effect for 20 years from the date of filing (absent some delays that are the PTO's fault and not the applicant's). It's really more of a quid-pro-quo system where, in exchange for the time-limited monopoly under which the inventor can charge license fees, sell the invention, etc., the inventor has to fully disclose the invention in sufficient detail that somebody skilled in that technology could reproduce it from the description.

Since the PTO publishes the granted patents (absent national security concerns of course), then other companies or inventors can then further improve the patented invention and thus drive the advancement of technology.

You can argue that this is a two year cadence, but they have pulled in the last two process nodes and now they are giving all that time back with 32nm. It makes me wonder if 22nm will slip a couple of months as well. That node will have even more challenges than 32nm.

If one technology is pulled in a bit, does that imply the next one is delayed if it is 4 years after the previous generation? 32nm seems like it will be ~4 years after the 65nm node. While 2 years is the target, there will be a few months of noise here and there (which also may not be process-technology related). Is 32nm a couple of months late, or was 45nm a couple of months early?
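To make the cadence question concrete, here is a small sketch; the launch dates are my own rough assumptions (65nm ~Jan 2006, 45nm ~Nov 2007, 32nm ~Jan 2010), not figures from this thread:

```python
from datetime import date

# Assumed approximate first-production dates for Intel process nodes.
launches = {
    "65nm": date(2006, 1, 1),
    "45nm": date(2007, 11, 1),
    "32nm": date(2010, 1, 1),
}

# Intervals between successive nodes, in years.
gap_45 = (launches["45nm"] - launches["65nm"]).days / 365.25
gap_32 = (launches["32nm"] - launches["45nm"]).days / 365.25
print(f"65nm->45nm: {gap_45:.1f} yrs, 45nm->32nm: {gap_32:.1f} yrs")
```

Under these assumed dates, 45nm arrives early (~1.8 years after 65nm) and 32nm gives the time back (~2.2 years after 45nm), so the two nodes together span roughly the 4 years the comment describes.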

On a side note, I think Intel is smart going with the integrated parts first, and I guess the high-end desktop and volume server parts will be close behind (in contrast to the "giving the customers what they want" approach, also known as the "we have power issues on 65nm, let's get the clocks up" approach).

The graphics will not be as good as AMD's, but again it will be a time-to-market advantage (like the MCM approach or Atom), and while AMD might eventually come out with a better solution, Intel will have a solid year or more in the market and be pretty far along on a 2nd revision. Again, you'll have AMD working on and talking about a more elegant or technically better solution, but while they are planning and working on it, Intel will be selling product.

This will give Intel a further foothold in the notebook and low-end desktop space - contrary to what you read, not every product needs DirectX 10 (or 11), or to play the latest FPS games and Blu-ray and other 1080p videos. You'll hear similar stuff from the fan community that you heard about Atom and netbooks... it can't play 1080p or 3D games... but Intel will sell a billion dollars' worth of a "failed" product in the meantime, before AMD has a product to compete.

Historically, Intel has had the lion's share of graphics. As we've discussed so many times before, a "good enough" solution. This was Wrecktor's impetus for the ATI acquisition: a powerful processor combined with powerful graphics in one package, the essence of 'Fusion.' However, things didn't go as planned.

Naturally, I'm glad you responded because of your profound understanding of process. This is where the rubber meets the road, especially now as the industry gets into smaller nodes.

The random element is process. Case in point: AMD's catastrophic Barcelona failure. It took them an awfully long time to get their processors mildly competitive with INTC's. Further, TSMC is having serious issues with their 40nm process: yields at 50%, chamber matching issues, etc. What WILL they do at 32?

In stark contrast, INTC is executing flawlessly. 32nm in Oregon is at least in preproduction, with Orthogonal working his ass off to get the second Oregon facility up and running by the second half of next year.

The question begs: can the world's foundries compete with INTC? My crystal ball, while not nearly as big as yours, is telling me INTC's well-timed settlement, in conjunction with the relative ease with which they execute, is positioning them to pull away from the rest of the world's foundries. With the legal settlements behind them, I see nothing but smooth sailing ahead with both CPUs and, importantly, a serious graphics component where others are failing miserably, at least from a process perspective.

In stark contrast, INTC is executing flawlessly. 32nm in Oregon is at least in preproduction, with Orthogonal working his ass off to get the second Oregon facility up and running by the second half of next year.

This isn't quite correct. If I'm not mistaken Orthogonal is from F32 in Az. The second Oregon facility should be ramping now as indicated here.

Production at D1D will be followed by the D1C fab in Hillsboro in the fourth quarter, followed by high-volume manufacturing at Fab 32 in Chandler, Ariz., and at Fab 11x in Rio Rancho, N.M.

So you won't get a chip from Orthogonal until the middle of next year.

If one technology is pulled in a bit, does that imply the next one is delayed if it is 4 years after the previous generation? 32nm seems like it will be ~4 years after the 65nm node. While 2 years is the target, there will be a few months of noise here and there (which also may not be process-technology related). Is 32nm a couple of months late, or was 45nm a couple of months early?

I said you could argue the point. :) We all know 45nm was early. However, given Intel's announcement that they were pulling in their 32nm schedule, as quoted here for example, being on time is actually late relative to a "pulled in" schedule. We've given AMD enough grief on this board for moving the goal posts. Intel should be held to the same standard.

Early this year, Intel canceled the 45nm equivalents of the chips they will be launching in Dec/Jan. The company line was that 32nm was doing so well there was no need to produce the 45nm design just to replace it so quickly. Now 32nm just launches on time. So either the 45nm parts were slipping, or 32nm slipped after the decision.

I'll freely admit that the issue may not be process related. But, given the lithography challenges (first time use of immersion litho) and general difficulty of dealing with increasingly tighter tolerances, I wouldn't rule it out.

The question begs: can the world's foundries compete with INTC?

Sparks, I happen to think that Intel is the best in the world at manufacturing. But as you know, each and every generation is harder than the last. And the road just gets steeper from here, as the difficulty of shrinking features is compounded by the need to introduce an ever-increasing number of new materials into the process flow.

Intel relies on their engineering talent to overcome those difficulties. But as the workload increases Intel is going to have to either hire a lot more people or push the people they have even harder. Intel already has a reputation for pushing their people harder than any other company in the industry. There is a reason there are a lot of ex-Intel employees and despite what Lex may choose to claim to the contrary, many of them left of their own free will. My fear is that at some point this business model will break and Intel will have a problematic process node.

From my observation, Intel is prone to get stuck in a rut until something jolts them out of it. I'm afraid that a crash and burn process node may be coming down the road unless Intel makes some carefully planned cultural changes.

Here's hoping I'm wrong. Who knows, maybe their SOC efforts will drive this cultural change. Because SOCs are certainly not "business as usual" for Intel.

On 2-year cycles: there are two things that need to happen to launch a process. One, the process has to be there: yield, performance, reliability. The second is the design health. Then there is the design-silicon interaction. Give or take a few months is in the noise, I think, when you try to synchronize such a complex process with such a complex design.

To argue about 1-3 months is splitting hairs a bit. It's clear that INTEL continues to march on with their Tick Tock process and design cycle. It's clear that everyone else has fallen further and further behind.

Lex has already told everyone why. And those same reasons are why everyone WON'T catch up.

Is it getting harder? Yes. Can Intel continue the cadence? If anyone can, it will be Intel. If Intel stumbles because of equipment or physical limits, you can be sure no one else will be able to do it either. Intel will hit the wall first, and will expend more money and resources than all the other guys combined. So if it is possible, they will find a way to do it. As of now, since 130nm or so, they have led and everyone else has followed.

Yep, working at F32 in AZ; however, I was in Oregon for several months this year. It's part of the normal process training/transfer method for engineers to "seed" the new technology at an HVM site, and yes, we're all working our asses off.

Speaking of foundries, what's the word on AMD selling their share of GF now that they aren't being held to the 20% outsource or 25% ownership clauses in the cross-license with Intel?

Also, I wonder if AMD cooked the books a bit in their Q3 earnings report. I see that, on paper, AMD proper actually made a profit for the first time in 12 or so quarters; but suffered an overall loss due to their owning part of GF. I just wonder if GF sold CPUs to AMD at less than cost, just so AMD could claim a profit...

Well G, I think my crystal ball, while not as big as yours, has served me well. (Hey, even a busted clock is correct twice a day) However, I'll take the time to pat myself on the back! A rare treat indeed!

Looks like Scientia is going to be "testing" an i5-750 vs. a P2-965 (new stepping @ 125W) that he recently bought, if you visit his blog. He pays lots of lip service to "proper testing" techniques, I guess as a jab at Anandtech.

Who wants to bet that the "proper" test apps are those that will favor AMD?? :).

"If that's what you want to bet, then perhaps you guys could lend some constructive advice from this side of the fence?"

OK, but I seriously doubt it would be "constructive advice".

Peh. Shmuck, HE'S going to compare the two chips?

It's been done, and not by Anand exclusively. MaximumPC (if you're a REAL hardcore enthusiast) in fact, has done it already. Further, if you read the article, the little (and may I add 'crippled') Bloomfield kicks the guts out of the Pheromone 965.

MAXIMUMPC IS BIASED TO NO ONE AND THEY TAKE NO PRISONERS.

Comparisons? Hell, what are we talking here? AMD's top "enthusiast" chip against a plain Jane, made for the masses, 2 memory channel, bread and butter, platform solution? GMAFB.

Costs perhaps? The aging socket 940 AMD platform is due for an overhaul. (Where will they get those extra traces for future Tri-Channel ---Hmmm?) Given the performance dog house they've been in, they've been wise not to lose existing customers with a forced socket/motherboard change. "See, I don't change motherboards, see, look at the money I saved."

What would you rather be doing, cruising down Hollywood Blvd. with "Two Ton Lucy" in the '68 pickup, or "Super Hottie Megan Fox" in a Black Pearl ZR-1? What it boils down to is being stuck with the same old hardware, buying into the same old platform, with the same old slobs. Of course they're cheap, they suck.

From an enthusiast perspective, I ask you, what are they saving, a couple of hundred bucks? The way I see it, you can't take it with you, and I've never seen luggage racks on a Hearse.

Dementia has a computer/tech blog and then doesn't consider himself an enthusiast? Anyone who spends as much time and thought on this stuff as we do is an enthusiast by default. We ALL know what the good stuff is; it's the assholes who bullshit around it that make the most noise.

He's just a mediocre guy pushing a mediocre chip.... and that's with AMD's best against one of Intel's budget products. As far as I'm concerned, I wouldn't be so proud or happy if it were the other way around. I certainly wouldn't be bragging about how good my cheap crap is either!

Here's some advice, Sci, stop the bullshit.

Pulling up to Spago's, Beverly Hills in a black pearl ZR-1 with Magnificent Megan, that's what I'm talking about. There's NOTHING cheap about having the best.

Ho Ho: It's also funny how he chooses to ignore power usage in his tests, whereas a couple of years ago the CPU and overall system power usage was supposed to be more important than actual performance.

Yeah I did find that a bit strange, I suggested measuring power draw as well. Testing power draw is standard procedure in hardware comparisons these days, I see no reason why it should be excluded. I think Sci's reason was no one buying mid range systems would care about the power draw since it should be negligible in the grand scheme of things.

For proper testing he should be using as identical setups as possible. Different PSUs, memory and case can easily make heat and OC results differ quite a bit.

He's using DDR3-1600 CL8 for both systems I believe? At least I couldn't find evidence to the contrary. Either way I wouldn't expect PSU to make *that* much difference (they're pretty close it seems), and Sci seems to be aware of the differences in his PSUs. Case is obviously a big one (size, location and type of ventilation/cooling, etc), but I was under the impression he has the slight variability between his cases controlled. We'll see.

I haven't read his site for a while, but as long as a reviewer states up front what it is he wants to test and how he plans to test it, I think that's sufficient. If he wants to test systems at a specific price point, and test performance in a specific range of apps, that is valid.

Few people are buying strictly based on performance, and hardware enthusiasts usually pride themselves on getting the best deals and best bang-per-buck. I almost never buy the most expensive hardware- I try to get the best deal, or I try to find a level of performance that falls within a dollar range that I set for myself.

As for proper testing techniques... heh. I'm sure that for some people, proper testing is more about results than technique. A smart buyer learns to tell which sites produce questionable reviews, and learns to look at more than just one site for reviews (unless your confidence in one site is just that high). If your outlook on computer hardware is more political than practical, this also can have a big effect on how you view review sites.

Tonus, haven't you had an i920/LGA 1366 setup for a year now, if memory serves?

The differences between the Lynnfield core and the Bloomfield core are:

The third memory controller is disabled, never made it through the binning process, or was never designed in.

There is only one PCI-E x16 link, which means ANY SLI or Crossfire setup will DOWN shift to two x8 links only.

"Turbo Mode" has been locked.

The difference between the P2 965 and the i5 750 is 700MHz. As a result, the Intel chip will get trashed on most single-threaded apps. This too shall pass. Single-threaded apps are going the way of the Dodo bird.

Intel buyers/enthusiasts, be advised.

The budget Core i5 is tempting for a quick budget rig but upgrades will be limited.

Anyone contemplating an upgrade to i5 from Core2 should save the extra bucks for a LGA 1366/i920 Mobo Combo (The way TONUS did) where a SERIOUS upgrade future is assured with UNLIMITED options, especially if you're considering gaming/photo/video editing with multi graphics cards and BIG high resolution monitors.

LGA 1366 IS STILL the weapon of choice for most enthusiasts. Leave the LGA 1156 rigs squarely where they belong, in the corporate sector, with the mom and pop machines, in the "White Box" gulag, and as cheap competition for AMD's best.

Sure, dollar for dollar the little Lynnfield even trashes a stock-clocked Quad 9550. That was Intel's objective, after all: a cost-effective replacement for LGA 775.

"I think Sci's reason was no one buying mid range systems would care about the power draw since it should be negligible in the grand scheme of things."

By that reasoning the only time power matters is in laptops and netbooks where AMD doesn't have a dog in the fight.

Forgive the skepticism, but it seems a bit convenient to say power doesn't matter now, when in the NetBurst era power draw was a huge issue. I don't recall the cost per kWh of electricity falling all that much in the last few years. The only thing that seems to have changed is the performance-to-power ratio of AMD's chips relative to Intel's.

Sparks: The third memory controller is disabled, never made it through the binning process, or was never designed in. "Turbo Mode" has been locked.

The third memory channel doesn't appear to exist in the die itself. See here: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634

Turbo mode locked? Do you mean turned off, or? It is certainly functional in the i5-750 (up to 3.2GHz with 2 cores active, 2.8GHz with 4 cores), see here: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634&p=5

What will be interesting is to see how often turbo engages during Sci's testing. I have a hunch that he's going to try to increase the ambient temps in the cases of both machines and observe performance. Some of us experience ambient temps of 35C in Summer, which will impact a CPU with Turbo Mode in a closed case, without a doubt. This causes "hey my computer is faster in Winter!", which personally I find a bit weird.

InTheKnow: By that reasoning the only time power matters is in laptops and netbooks where AMD doesn't have a dog in the fight.

Yeah it's a bit odd. It seems he's going to assess power consumption via thermal testing. A bit vague. I'd prefer to see raw power usage numbers (whole system of course, at the wall; I don't think it's reliable to try to isolate components within the system, unless of course you're some hardcore electrical engineer ;)). Simply adding up all the rated power numbers for each component does not give an accurate idea of real world power usage.

"I'd prefer to see raw power usage numbers (whole system of course at the wall, I don't think it's reliable to try to isolate components within the system, unless of course you're some hardcore electrical engineer ;))."

"Thermal testing" is not only vague, but it opens a whole new can of worms. You get into cooling rates, design of the heat spreader, processor layout and a whole host of other differences. Seriously, how do you ensure the heat paste is evenly spread on both systems? It's really just a lot of hand-waving anyway. Long term power from the wall is what really matters in my mind.

I'd be completely happy with measuring power from the wall, but I'd go one step further. If you really want to see what a system is going to do in "real world usage", then you need to measure it while you use it. I'd like to see total power used off each system over the course of a week of actual usage.

You could even normalize the data for time used by reporting average watts (total watt-hours divided by hours of use).

No need to go out of your way to do anything extra, just plug it in and use it for a whole week. At the end of the week, you'll have a real world number that will tell you if both systems are really within spitting distance of each other.
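
If it helps, the week-at-the-wall idea boils down to very simple arithmetic. Here's a minimal sketch; the meter readings are invented (hours, watts) samples purely for illustration, and a real Kill A Watt style meter reports cumulative kWh directly so you'd only need the last two lines.

```python
# Sketch of the "measure at the wall" idea. The samples are hypothetical
# (hours_elapsed, watts) readings logged during normal use.
samples = [(0, 60), (2, 120), (5, 65), (8, 140), (12, 58)]

def energy_wh(samples, end_hour):
    """Step-wise integration of power over time, in watt-hours."""
    total = 0.0
    for (t0, w), (t1, _) in zip(samples, samples[1:]):
        total += w * (t1 - t0)             # each reading holds until the next
    t_last, w_last = samples[-1]
    total += w_last * (end_hour - t_last)  # last reading holds to the end
    return total

wh = energy_wh(samples, end_hour=14)  # 1351 Wh over 14 hours of logging
avg_watts = wh / 14                   # normalized for time used
```

Run it over a full week of real usage on both boxes and the two `wh` totals are directly comparable, no arguments about when to start or stop measuring.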

By the same token, I think that measuring the number of recharges on two laptops with the same battery would be a good indicator of real world power consumption as well.

The beauty of the approach I'm proposing is that it gets rid of the whole "it uses more power but for a shorter time" part of the argument. There is no need to determine when to start and stop the measurement. As long as the usage patterns are similar, I think it is a fair test.

There really isn't any need to out think the room here. If you can sell an idea as simple as my proposal, though, I'll be pleasantly surprised.

Not so fast. AMD's sugar daddy Dubai has 60 to 80 billion in red ink, no doubt. However, Dubai itself has a big daddy, too, the UAE. They're the ones with the oil and the real money. They'll pull the cookies out of the fire, keep the ski slopes in the desert, the man-made palm-shaped islands fertile, the world's tallest building going up, and of course, their little venture into the chip business solvent.

Sparks: BTW, I can overclock my air cooled QX9770 from 3.88 GHz in the summer to a very brisk 4.27 in the winter, believe it or not.

LOL - you must be either a dedicated or medicated overclocker to leave the window open during the NY winters :).

Haven't seen many recent oc reviews on the i9 Westmere, but if the initial 5GHz oc's on air cooling with the ES chips carries over to the released product, that would be the one to go for :).

Wonder where all those AMD fanbois who claimed Intel was in for a world of hurt with the UAE backing AMD, are now? I'll hafta dig up some old posts on THG and remind them ever so gently of the errors of their ways :).

This link gives a quick and dirty review of the i5-750 compared to the PHII X4 965. According to the review, the systems were nearly identical retail systems. They found the performance differences to be pretty much a wash. There was about a $100 cost advantage for the AMD system.

The thing I found interesting was the "irrelevant" power usage testing. Power was measured at the wall with the systems idle and while running a gaming benchmark.

Intel came out on top: where the AMD-based Fusion HD 965 consumed 103W at idle and 163W under load, the Vortex HD 750 required only 59W and 120W respectively.

With a 43W difference both at idle and at full load, I can see why some of the more rabid types might not want to look too closely at power consumption.

"With a 43W difference both at idle and at full load, I can see why some of the more rabid types might not want to look too closely at power consumption."

I recently had an experience that certainly changed my outlook on power consumption. (Albeit for a moment, and a very short one at that. The only way they'll get my 1 kW PS from me is to pry it from my cold dead fingers.)

However, even the rabid types (myself inclusive) are forced to take a look at the power issue. Perhaps not from the comparison of these two low-end bread and butter dogs, which I wouldn't allow in my house; nevertheless, I had to take a second look. We are talking volume here.

When we electricians build out Data Centers for large companies, the electrical power requirements, voltage, plug types, ampacities, and phase load balancing are done exclusively by the electrical engineers. (They are definitely not the caliber of engineers on this site. Such a comparison would be ridiculous.) But when compared to the Cro-Magnons wielding pipe benders, hammer drills, and sawzalls (such as myself), they are absolutely necessary.

Just last week I was assigned the task of adding seven 4P servers with their respective hard drive arrays to an existing room utilizing the existing circuits (homeruns) under the raised floor. They were adding about 100 stations, outside, on the floor. Big deal, I thought: change a few twist-locks, balance the load at the panels, pull out the Amprobe, and check the ampacities on each circuit. Piece of cake.

Dead wrong. The HP ProLiant DL580 G5 servers were pulling around 7 amps each. (These things are big and heavy!) The hard drive arrays, 3 of them per server, with about a dozen or more SCSI drives each, were even heavier, and they were pulling 7 amps apiece! Then there were the backup switches. Hello! Not good. I was within 80% of the circuit load; while not bad, it's certainly not within my comfort zone. That's all I needed, a tripped breaker on start-up. Not on my watch. I opted to split the servers and their respective arrays across two separate circuits.
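
For the non-electricians, the 80% check described above is just a sum against the breaker rating. A rough sketch with assumed numbers (the 20 A breaker rating is my assumption; the ~7 A draws are from the story):

```python
# Rough sketch of the circuit load check, with assumed numbers.
# Standard practice: keep continuous loads under 80% of the breaker rating.
BREAKER_AMPS = 20.0
SAFE_LIMIT = 0.8 * BREAKER_AMPS  # 16 A comfort zone

def circuit_ok(loads_amps):
    """True if the summed draw stays inside the 80% limit."""
    return sum(loads_amps) <= SAFE_LIMIT

server = 7.0  # one DL580-class server, as measured with the Amprobe
array = 7.0   # one loaded drive array

one_circuit = [server, array, array, array]  # 28 A on one circuit: no good
split = [server, array]                      # 14 A per circuit: fine
```

Which is exactly why a server plus its three arrays had to be split across separate homeruns.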

What was supposed to be a "standby" milk run turned out to be OLE' Sparks running around and popping floor tiles, hunting down seven additional UPS homeruns to feed these bad boys. Sure, I got fat on the OT, no doubt. However, I got a lesson on power usage. This stuff adds up and the customer WILL see it in his electric bill.

Considering the cumulative power these things draw, not to mention the 100 or so work stations on the floor, does power matter? You bet. 43 watts times 100 machines, servers that can scale back power on the fly, and SSDs as opposed to 15,000 RPM drives, fuck'n-a bubba! Every bit adds up and counts.
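
Scaling that 43 W delta up is easy to sanity-check. A quick back-of-the-envelope sketch; the electricity rate and the 24/7 duty cycle are my assumptions, not figures from the review:

```python
# Back-of-the-envelope on the 43 W delta across a floor of machines.
# The $/kWh rate and always-on duty cycle are assumptions, not data.
delta_watts = 43
machines = 100
hours_per_year = 24 * 365    # assume always-on
rate_per_kwh = 0.15          # assumed $/kWh; varies by region

kwh_per_year = delta_watts * machines * hours_per_year / 1000
annual_cost = kwh_per_year * rate_per_kwh  # roughly $5,650/year
```

Even at half the duty cycle, that's real money on the electric bill, which is the point.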

The issue is at best arguable for a Plain Jane home unit even with my "rabid" XE home machine. However, power is a BIG issue when you start scaling these things up. A lesson I learned the hard way.

(It took us 12 hours to mount, install, bring up the blades sequentially, and establish links to the stations. I asked them if they were Intel machines. I told them I don't do AMD. A couple of the network geeks laughed.)

It sounds like a company trying to deprive me of choices, but surely that can't be right. This is AMD, the consumer's only friend, we're talking about, right? They would never do anything that would limit my choices.

Here's some extremely disappointing news. It seems the Larrabee program has taken a turn for the worse. Perhaps it's not news for those on the inside who blog here; nevertheless, the news is disheartening to enthusiasts, like myself, on the outside.

The unrestrained duopoly between NVDA and ATI, mired in conspiracy and price fixing for years, will continue into the next decade. Here are two companies that have actually been convicted of causing damages to both consumers and vendors, and ironically, considering INTC's monumental fines, they quietly slimed through the process with a token slap on the wrist. The action was not even reviewed by the EU. Apparently, there isn't enough money in high end graphics for cause. INTC holds the lion's share in the low end solutions anyway. I'm sure this is a factor.

The biggest loss is to the consumer/enthusiasts. Inefficient, power hungry graphics cards, whose process technology has obviously approached its limits, will not benefit from INTC's cutting edge process expertise. Unfortunately, it's business as usual, where the software is always ahead of the hardware curve. Larger GPUs, where more demanding power and cooling solutions are currently the only remedy, will continue to be the norm.

Personally, I'm saddened. I sincerely hope all is not lost to those who are directly involved.

Apparently what they've done is canceled the release of the first iteration of Larrabee due to poor performance. Looks like they'll continue to develop it with the hopes of having a marketable product somewhere down the line. Shades of Itanium?

This may be good in niche applications where the SW can be, and often is, tuned to the HW.

But I wonder how well a general purpose core (or many of them) will do competing against HW specialized to perform certain tasks. Even if Intel could maintain a process node lead (which would likely be a half node in the graphics area - for example 32nm vs 40nm or 22nm vs 28nm), I also have to wonder about power - an all purpose x86 vs specialized designs? I think a half node lead would not overcome that.

The only other way it might work is if people tore up the SW and started over, tuning it to the HW. And this strikes me as a lot like Itanium... there is a huge base of SW written for Nvidia/AMD (like x86 space for Itanium) - why would any game developer or OS manufacturer want to do two sets of work?

More foundry dynamics as Samsung is starting to move more aggressively in the foundry business...

They are looking to match TSMC, and GF should be worried. Samsung knows volume manufacturing and scale (from their massive memory production) and they get their technology from IBM too... They see continued flatness (or slow growth) in the memory business and see this as a growth vector.

If you ask Intel folks privately who their biggest threat is, they'd probably say Samsung (and would probably have been saying it for some time now). TSMC still may have the best cost structure (a guess, I don't know that for certain) and GF/AMD sure talks a lot about technology, but if I had to bet on someone based on a balance of tech, ability to do volume, and cost, it'd probably be Samsung.

Given Samsung comes from the memory world, where it has been bloody in terms of pricing, a larger move into the foundry business probably does not bode well for the other foundry players (but is probably a good thing for foundry customers).

Einstein once dismissed the idea of using different soaps for different purposes. He abhorred the concept of one soap for shaving, one soap for washing your face, shampoo, etc. Of course he was referring to one set of mathematical laws for the 'big four'. It never panned out.

Why is this relevant? As usual, you boys have me at a disadvantage. Don't they have a grand unified process model? Why is logic process so different from graphics process, so different from memory process? Is the graphic architecture so different from CPU/logic architecture, and why you need different process nodes (half nodes??) for each?

As shown so many times before, what they can do with memory, they subsequently fall flat on their collective asses with logic. I get the "memory thing", so what sets graphics logic apart from CPU/logic? Why would INTC have trouble with graphics when INTC's 32nm is at the gate chomping at the bit?

Naturally, I use an electric shaver and I'm lost, so why the different soaps?

"As shown so many times before, what they can do with memory, they subsequently fall flat on their collective asses with logic. I get the "memory thing", so what sets graphics logic apart from CPU/logic? Why would INTC have trouble with graphics when INTC's 32nm is at the gate chomping at the bit?"

Sparks, the issue here isn't so much process as it is design. Intel can design the transistors, what is biting them in the rear is the design issues. Designing a CPU and designing high end graphics are very different by all accounts.

I have a friend who used to work for NVIDIA and now works for Intel. He just so happens to work on the Larrabee project. A couple of months ago, he told me that his group was finding bugs faster than the design guys could fix them.

When I talked to him early this week, he was in better spirits than I've seen him in for quite some time. He assured me that Larrabee was not dead, but that Intel had finally acknowledged that doing graphics was going to be hard and has now set realistic timeframes for Larrabee development. So rest assured, the project isn't dead, it is just being reset and given realistic timeframes (which haven't been released yet). I didn't press him on the time frames.

It is important to remember that Intel is trying to break the mold on GPU design with this beast as well. So not only are they trying to design a state of the art GPU from scratch, they're trying to invent a better mousetrap at the same time. My friend believes this is doable, just not on the kind of wishful managerial schedule that Intel rolled out.

I found out about the cancellation of the current Larrabee product with the rest of you when it was made public, so it was news to me. Although, I wasn't completely surprised since the writing was on the wall. Manufacturing doesn't get as many details from the design group as you might think, but it isn't hard to put 2 and 2 together.

Sparks said... Why is this relevant? As usual, you boys have me at a disadvantage. Don't they have a grand unified process model? Why is logic process so different from graphics process, so different from memory process? Is the graphic architecture so different from CPU/logic architecture, and why you need different process nodes (half nodes??) for each?

It's like the difference between building a jet engine and a rocket. A lot of similarities in the overall concept, but extremely different on the implementation level.

I don't have much experience on the design end, nothing more than a few VLSI and layout classes in college, but there is as much art to it as there is science. Debugging logic requires a lot of time and effort. You don't have a C-style debugger to step through your "code" and make changes on the fly. There are certainly software simulation tools, but it's much more complex since you have to deal with timings, clock trees, and power and heat distribution that affect performance.

It's not terribly uncommon to have a piece of logic that works very well in a hypothetical software simulation with ideal timings etc, but when put into a real process, all sorts of shortcomings and bugs may appear.
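
A toy illustration of the timing point (nothing like a real EDA flow, and the gate delays are invented numbers): a netlist that's logically correct can still fail once each gate carries a propagation delay and the critical path is summed against the clock period.

```python
# Toy timing check (not a real EDA tool): sum invented per-gate delays
# along a critical path and compare against a target clock period.
GATE_DELAY_PS = {"and": 30, "or": 25, "xor": 45, "not": 15}

def critical_path_ps(path):
    """Total propagation delay along a chain of gate types, in ps."""
    return sum(GATE_DELAY_PS[g] for g in path)

# Hypothetical carry chain through a small adder
path = ["xor", "and", "or", "and", "or", "and", "or"]
delay = critical_path_ps(path)           # 210 ps
clock_period_ps = 200                    # target clock period
meets_timing = delay <= clock_period_ps  # False: an ideal zero-delay
                                         # simulation would "work" here,
                                         # real silicon would not
```

That gap between the ideal simulation and the delayed reality is exactly the kind of bug a C-style debugger never shows you.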

Half nodes are a way for some foundries to quickly get a "dumb shrink" to market. In their business model, it may be more economical to get the shrink at a relatively lower development cost. However, things are changing quickly as we move to smaller and smaller nodes, and even this strategy is coming into question. Regardless of what method foundries and semi companies use, it's going to continue to get exponentially more expensive to push smaller.

Orthogonal, ITK, thank you. I had no idea graphics architecture had a completely different set of design rules. I HATE to say this, but that blowhole CEO at NVDA was right ------- for now.

Brilliant analogy, by the way, jet engine vs. rocket engines, not to mention the plethora of hypersonic designs in between.

Incidentally, did I say 32nm at the gate chomping at the bit? Well, good news for INTC for a change. Pricing for ROCKET powered Clarkdale has popped up on the web. Reportedly, the graphics component has a 50% performance improvement over the current INTC integrated graphics solution. At 150 bucks a pop, these will undoubtedly take the low-end gulag by storm.

http://www.hexus.net/content/item.php?item=21541

Here's to Larrabee, hoping its second incarnation will be ready for prime time in the not so distant future.

It is looking like "gate first" might not be all it is cracked up to be according to this report. If the report is accurate, it would seem that members of IBM's fab club are pushing for IBM to provide a gate last process.

Some of the more juicy bits follow.

In a session on high-k challenges at IEDM, Thomas Hoffmann, manager in the CMOS platform technology group at IMEC (Leuven, Belgium), said both the gate-first and gate-last process flows have delivered a resumption in scaling for the oxide inversion thickness (Tinv).

Assuming, of course, you ignore the inconvenient fact that no one has the gate first process in HVM.

Although the gate-first approach more closely resembled the process flow of the pre-high-k era, problems have cropped up, Hoffmann said in a Sunday short course presentation. At various technology conferences this year, researchers have discussed a rolloff of the flatband voltage, shifts in the PMOS threshold voltage, and interface layer regrowth. "When the metal sees a high thermal budget, it has an impact on the work function," Hoffmann said. Importantly, the problems created "fundamental issues for mobility, probably due to remote Coulomb scattering. It takes a fair amount of work to improve the quality of the layers to reduce these changes."

I believe Guru was pointing out some of those issues long ago on this site. But hey, what do we know, we're just rabid Intel fans.

Another source said the gate-first approach has yield issues. The capping layer is only ~5 Å. Defects are created from debris generated from the capping layers. Those particles impact yields "and can be the difference between profit and loss for a foundry," he said.

With such glowing praise, one wonders why this process hasn't been in HVM for years. I'm sure this has nothing to do with IBM's long standing track record of producing a part of the process that looks really spiffy and then won't scale worth a darn due to integration issues. Another point that has been brought up here repeatedly.

Both of the gate formation approaches have their problems, and there is no doubt that the gate-first approach is significantly simpler

It'd be easier to just paint on a layer with a roller... but that doesn't mean you should use that approach! What good is simpler if it doesn't work? It'd be simpler to just run all the wiring in my house through one really big-ass circuit breaker, but that doesn't mean that's the approach I should go with.

I hear NASA is designing a new shuttle to go into outer space, and in parallel someone's working on an alternative paper airplane. When asked about the 2 approaches, a NASA official said: "Both approaches have their problems, but no doubt the paper airplane is significantly simpler!"

"Nobody has such a low EOT for a 28 nm LP process" (I assume LP = low power)

kind of neglects the fact that there are no 28nm gate last technologies to compare against, as Intel doesn't do half nodes... :) I'm not sure if there is a 28nm TSMC data point, but the "comparison" is kind of hollow as nothing is in production at that half node. I also notice he calls out LP and not the normal high performance process... (if there were Vt fluctuation issues they'd probably be more significant with the high performance processes)

Finally, gate last is not a drop-in - you could be talking totally different materials (esp. some of the metals) and you have new challenges (gap fill of extremely small features, new etch and CMP steps). It's not like you just install one new tool and poof, you have gate last.

Quite frankly, this could not have come as much of a surprise to IBM - though one thing to consider is maybe they couldn't make gate last work either... While the gate last approach lends itself to flexibility around material choices, it probably is a more technically complex solution to implement - and would require IBM to re-tweak their capping and metal layers.

What I hadn't realized, and is a bit surprising, is the defect issues mentioned in the article from the capping layer - I wonder if that is specific to the capping layer film; it would seem the process of putting down a thin film is a solvable problem... I've always assumed things would eventually migrate to gate first in the long run (the gap fill for gate last I think may hit a wall eventually), but if there are defect issues with the capping layers now, they may just get worse at smaller dimensions.

I seem to recall some discussion over the AMD settlement, where Intel gets the rights to use AMD IP and vice-versa. If so, wouldn't that include the GPU IP that ATI holds? So Intel could actually design & build a 5870-type GPU without fear of patent infringement...

"Toshiba manager said. Intel was able to restrict the layout of its poly gate lines to one dimension because of its in-house coordination of the process and the design rules."

Heh, heh, heh. I could be spit-balling it here, but haven't we said this all along?

"Only real men have FAB's"

Hmmm, they said it too!

ITK, sounds like they've been overcooking those delicate gates.

And, what the hell's the difference between FDSOI and Fully Depleted "Extra Thin" Silicon On Insulator? Fully depleted is fully depleted, and partially depleted is partially depleted. This much I've got. Where does the "ET" come in? What am I missing here, besides a "new" IBM term designed to suggest they found a better/cheaper way to implement SOI?

INTEL 32nm has a mean looking logic process. They've milked strain to get PMOS currents as good as NMOS. From what I can gather, no other manufacturer has transistors in production that even come close. Faster transistors mean designers start with a better building material. My analogy for you geek wannabes is to think of it as a higher horsepower engine. You can tune that engine for more horsepower or more MPG. No matter what, you've got a better engine. Now it's up to the designer whether they use this superior engine for a better car. We'll see soon enough next year when Westmere launches, and Sandy Bridge soon after that.

INTEL has a 32nm SoC technology too, and that looks pretty damn good. Watch out, foundries (Samsung, Global, TSMC). This one is high-K/metal gate and GATE LAST! Last I checked, no foundry will have high-K/metal gate in production for another couple of years at best, let alone with the yields INTEL is likely getting.

Intel is now shipping second-generation production from a couple of factories, with two more about to start production. And where is the rest of the world? They've got nada, NOTHING, not one wafer or sample die yet. Read on, and I'll predict there will be some delay.

The most interesting thing is the rumbling that gate first ain't looking so good. Lots of grumbling from GF, TSMC and others. I smell another repeat of SiLK.

I told you guys this a long time ago: consortiums don't work. 1+1+1 is NOT equal to 3. It's just three people with three different priorities arguing about who gets to drive, and the other two get shafted. Guess who is getting shafted at the moment. The reality is that one person has money in the bank, and the other riders have a problem: they've got no technology.

And on my comment about whether gate first can or will work: I ask you to go back to your thermodynamics books and read up on the phase diagrams for the rumored gate dielectrics, buffer layers and gate electrodes the consortium is using. Then go look in any silicon processing book at the processing temperatures required to form good source/drains. That might convince you of which path you want to take: one that isn't thermodynamically stable, or one that requires engineering invention.

Tick Tock, Tick Tock. Is that Apple I hear, ready to embrace INTEL at some point for their smartphone? The processor there is going x86, just like servers did. It will be x86 everywhere, because the momentum is too great and the cost to develop competing technology is simply not feasible unless you already have it.

"My analogy for you geek wannabees is think of it as higher horspower engine. You can tune that engine for more horsepower or more MPG."

Not bad. See, I knew someday you'd be able to communicate with, and reach the masses.

However, in this analogy, with engines you can have both. Simple, actually: just increase the compression ratio, and bango presto, you can have your cake and eat it too.

You gotta know how to STRESS the engine.

Exotic materials and alloys are mandatory.

The engineering has got to be spot on.

But most importantly... you gotta have the HIGH OCTANE gas, baby!

read up on the phase diagrams for the rumored gate dielectrics, buffer layers and gate electrodes that the consortium are using

What materials are they using for the gate electrodes?

The high K is stable at temperature... this should be obvious, because Intel's first gen of high K puts down the HfO2 first too (and thus it sees the high temps you are so concerned about from a thermodynamic point of view). It is only the gate (which includes the work-function metal) that is put down last. Gate first or gate last technically refers to the GATE, not the GATE OXIDE... (on 32nm Intel chose to put down both the gate and gate oxide last)

The capping layers reported at IEDM are both oxides easily capable of handling typical S/D and other anneal temps. In fact LaO is one of the high K's being looked at to replace HfO2 eventually and Al2O3 is used fairly commonly in DRAM devices.

So again what gate materials are they using and is that what you have the issue with?

Meanwhile, while we wait for an answer, ASUSTEK and INTC have a little answer of their own, especially for a geek wannabe like myself. (Go easy on him, G, he means well.) This is where the rubber meets the road, folks.

"General manager of ASUS' motherboard business, Chie-Wei Lin, reckons Gulftown and X58 is on track to become one of "the fastest personal computing platforms in history", .....................and

"the chip is essentially a 32nm, six-core derivative of Bloomfield that's armed with hyper threading and 12MB of L3. That, on paper, makes it a 12-thread CPU that's on track to become the most powerful option for the enthusiast buyer."

The article also lists those motherboards, along with the required BIOS upgrade, which will be compatible with MONSTER Gulftown.

Mmmm, some nice 2133 MHz Tri-Channel Memory is in order to keep this bad boy well fed. At 32nm I see a nice clean daily overclock to 4 GHz.

How many 'technical' people within the zone does it take to recognize that Abinstein is wrong and spreading FUD about Intel Turbo Boost? Two: Kaa and Zaphod. The rest seem unable to read the white paper and find it too convenient to quote the conversations out of context. This includes Hyc (I state his name here on purpose, since he might visit here :))

It all started with this wrong/FUD statement: http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=137242&st=0&sk=t&sd=a

Also when turbo mode engages, all cores in the chip are forced to max frequency. Why would I want all cores burning energy when my workload consists of only one or two threads?

Kaa tried to explain that to them, and Zaphod even showed Turbo at work on video... well, Kaa was attacked by the rest while Zaphod's video was ignored :)

Abinstein is clearly wrong on this first (FUD) attempt. If his workload is one or two threads, then the remaining cores are likely not even in C0/C1 state; they're in C3/C6. (How come no one else who is supposed to be technical caught this?) :)

Then came Abinstein's second try:

"The point is with turbo mode, your core will either be at max frequency or in C3/C6 (deep sleep) state." ...and the rest; see it for yourselves.

Hyc claims that Kaa misunderstood Abinstein's statement, even though one of Hyc's own descriptions seems to be correct. Since Abinstein's claim differs from what Hyc said, I wonder why Hyc continues siding with Abinstein :)

Anyone who follows the whole thread will see which part of Abinstein's statement is wrong, because no one except Kaa and Zaphod told him that his first FUD was wrong. And his second statement actually tightly supports his own first FUD, which is wrong. Hyc, unlike Abinstein, seems to understand what Turbo really is, yet somehow still sides with Abinstein (explaining Turbo in a way that makes Abinstein look right). Hyc, I suggest you look through Abinstein's statements in the whole thread; you would see clearly that he understands nothing about Turbo. He even makes statements like this: "The point is with turbo mode, your core will either be at max frequency or in C3/C6 (deep sleep) state. You have to enable speedstep probably because that is how C3/C6 states are enabled."

This just proves that he doesn't even know how Turbo is engaged. Turbo Boost, plain and simple (and you know this), engages only in the P0 state, and only under the proper OS power policy. There are multiple P-states in which Turbo does not engage. I do not think Kaa is misreading Abinstein's English here. Abinstein really thought the system would either run at max turbo frequency or sit idle. Just read through all his posts in that discussion and you'll see what I mean.
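To make the Turbo engagement behavior described above concrete, here is a minimal sketch in Python. It models only the logic under discussion: Turbo is considered only when the OS requests the highest performance state (P0), and cores parked in deep C-states free up headroom for the active cores rather than being "forced to max frequency." All names, base frequency, and bin counts are illustrative assumptions, not Intel's actual algorithm or numbers.

```python
# Hypothetical sketch of Turbo Boost gating. Assumptions (not Intel's
# real parameters): a 2.66 GHz base clock, 133 MHz turbo bins, and up
# to 2 extra bins when other cores are idle in C3/C6.

def turbo_frequency(requested_pstate, core_cstates,
                    base_ghz=2.66, bin_ghz=0.133, max_extra_bins=2):
    """Frequency the active cores may run at, given the OS-requested
    P-state and the C-state of each core in the package."""
    if requested_pstate != "P0":
        # The OS asked for a lower performance state: Turbo never engages.
        return base_ghz
    active = [c for c in core_cstates if c in ("C0", "C1")]
    idle = len(core_cstates) - len(active)
    # Fewer active cores -> more power/thermal headroom -> more turbo bins.
    extra_bins = min(max_extra_bins, idle)
    return round(base_ghz + extra_bins * bin_ghz, 3)

# One-thread workload: three cores sit in C6; only the active core clocks up.
print(turbo_frequency("P0", ["C0", "C6", "C6", "C6"]))  # 2.926
# All cores busy: no idle headroom, so base frequency.
print(turbo_frequency("P0", ["C0", "C0", "C0", "C0"]))  # 2.66
# Below P0, Turbo is simply not considered.
print(turbo_frequency("P2", ["C0", "C6", "C6", "C6"]))  # 2.66
```

The point the sketch makes is exactly the one Abinstein missed: idle cores are in deep sleep, not "burning energy at max frequency," and Turbo is a P0-only phenomenon.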

Phase diagrams are nice... but are they applicable? It's not like these anneals are done at elevated temperatures for long periods of time until equilibrium is achieved. Instead of looking at the phase diagrams and relying on thermodynamics, perhaps looking at the kinetics might also be necessary?

You might also consider that even if you were at temperature long enough (which we're not, but let's pretend we are for argument's sake), perhaps an ultrathin film a couple of atomic layers thick would not behave the same as a theoretical phase diagram describing a bulk material? Just a thought...

And then of course there is the manner in which the films are put down, and anything that may be introduced into the film during deposition to stabilize it... If, for example, the HfO2 is put down with a halide precursor vs. an organic precursor and has different levels of halide or organic contamination in the film, or if it is lightly doped with a silicon precursor to form a very low-ratio HfSiO2... do you think the phase diagrams account for this, or are they representing bulk, ideal materials?

You also have any stresses on the film (from above or below), which could also affect phase formation/transformation and impact the nice theoretical diagrams you refer to (and, to make this even more ridiculous, this once again assumes you're at temperature long enough to have a phase transformation).
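The kinetics argument above can be put in back-of-the-envelope numbers. Whether anything actually transforms or interdiffuses depends on time at temperature, via an Arrhenius-type rate, not just on what the equilibrium phase diagram allows. The sketch below uses a diffusion length of sqrt(D·t); the prefactor and activation energy are made-up illustrative values, not data for any real gate-stack material.

```python
# Illustrative kinetics check: a millisecond spike anneal vs. a one-hour
# furnace soak at the same temperature. D0 and Ea below are hypothetical.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_nm(d0_cm2_s, ea_ev, temp_c, time_s):
    """sqrt(D * t) with Arrhenius D = D0 * exp(-Ea / kT), returned in nm."""
    temp_k = temp_c + 273.15
    d = d0_cm2_s * math.exp(-ea_ev / (K_B * temp_k))  # cm^2/s
    return math.sqrt(d * time_s) * 1e7                # cm -> nm

D0, EA = 1e-2, 3.5  # hypothetical prefactor (cm^2/s) and activation energy (eV)

spike = diffusion_length_nm(D0, EA, 1050, 1e-3)    # ~ms spike anneal
furnace = diffusion_length_nm(D0, EA, 1050, 3600)  # 1-hour soak
print(f"spike anneal:   {spike:.4f} nm")
print(f"furnace anneal: {furnace:.1f} nm")
```

With these (invented) numbers, the furnace soak moves material over nanometers while the spike anneal moves it a small fraction of an atomic spacing, which is the whole point: the bulk equilibrium diagram alone doesn't tell you what a millisecond anneal will do to a few-monolayer film.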

But of course most everything else is spot on... well except for the TSMC part... oh and the materials part... oh and the... you know maybe it will be quicker in the future to just state what you get right?

Yeah, I'm being a jackass, but only because it gets old when you tell people to go read up on things and speak authoritatively on subjects that you kinda understand but can't apply correctly... It's one thing to just state an opinion; it's another to butcher the science, completely misapply concepts, and speak in a manner where people are led to believe you know what you are talking about.

There's already a site for folks like that - it's called AMDzone.

BTW - have you tracked down the actual gate materials IBM is using yet? I would guess that since you already checked the phase diagrams, I'd get a near-instantaneous response.

It looks like Commissar Feinstein will stop at nothing short of destroying that last bedrock of American strength and market leadership by continuing this ridiculous crusade against Intel.

One commenter on the article put it best:

"The complaint itself reads like an accusation against the company for having a successful business and generally engaging in competitive enteprise. And the idea is ridiculous that bureaucrats can know what is a superior product. This is a brazen override of consumer preference."

"And then of course you have the manner the films are put down and anything that may be done to stabilize or introduced into the film during the deposition process... If for example the HfO2 is put down with a halide precursor vs an organic precursor and has different levels of halides or organics contamination in the film or say if it is lightly doped with a silicon precursor to form a very low ratio HfSiO2... do you think the phase diagrams account for this or are they representing bulk, ideal materials."

Precursor this, precursor that, atomic layer this versus that... Hmm, sounds like a guy well versed in defending gate first to management. I can't wait to see the first billion-transistor CPU with gate first and watch it scale to 22nm and beyond. It'll sail smooth as SiLK, LOL

So basically you just made up the science in the vain attempt to make it sound like you know what you're talking about and as if you have some knowledge (other than what you can read on the web) in this area.

FYI - gate last is the superior approach (at least right now), my issue is not with your opinion it's with your ridiculous use of science to support it.

I dare you to find a single posting on this site where I've defended gate first or said it is the better approach. But I guess when you sound like a complete idiot on the science and can't argue the facts once your made-up BS is exposed, the best approach is to attack the messenger (and, for the icing on the cake, do it incorrectly - I'm not a proponent of gate first at all!)

BTW - ever find the gate materials IBM is using? I would have thought that telling us to go check the phase diagrams meant you had already done so... but since you don't seem to know what they are using (not the gate oxide or the capping layer, but the gate itself), I'm curious how you were able to identify issues around the "phase diagram" and thermodynamic stability without actually knowing the gate material.

No need, there is no explanation. It's completely illogical and irrational to believe that a bureaucrat or regulator can do anything right for a market. The whole point of a "regulator" is to control and shape market outcomes because people are "greedy", "evil" and "untrustworthy" to do what is right for consumers and the market and they must therefore step in and take action.

Of course this is preposterous, because if everyone in the market has these traits, then by definition those in power to regulate are also greedy, evil and untrustworthy to do what is right for consumers and the market.

Greed is part of the human condition; people will always adapt and game any system in place to do what is most beneficial for themselves. (Which is why the state should be removed from the market.) Greed is descriptive of human action, not prescriptive. When this is recognized, the best system to have in place is one that channels greed into productive uses rather than regulating it. When that happens, the most profitable ventures are those that best serve and produce for the market.

Intel is not a monopoly, never has been and has never been accused of any act of aggression. They are also the most profitable semiconductor company on the planet and are therefore serving the market better than anyone else as determined by consumer preference.

Moore's Law advances, products continue to plummet in price, and all benefit. If AMD can't find a way to carve out a profitable position, too bad. Keep the FTC out, let AMD fail if necessary, and allow someone else to purchase their assets and IP for a song, which will be far better for the market in the long run than keeping a wealth-destroying company alive on corporate welfare.

Hmm, I heard in the news reports that the FTC is _not_ going after any fines or damages against Intel; most of it seems to be an attempt to formalize the agreement Intel already made with AMD in their private settlement. However, there seems to be additional stuff that Intel will fight in court in September. I haven't read all the details, but a lot of this seems unnecessary in view of the private agreement.

However now I bet Cuomo will get a head of steam up his arse and go ahead with the NYS suit - they need to get back that $1.4B handout they gave to Hector & the UAE/AMD/GF gang of morons.

I'm not sure if Cuomo can do that. From the blog at Khorgano's link, the FTC is essentially the judge and jury as well as the plaintiff. They can build a flimsy case (i.e., alleging damage to 'the market' or 'consumers' without offering specifics) and still find against Intel.

I believe that Cuomo would need to allege and prove specific damages, and AMD is unlikely to want to cooperate fully now that they have an agreement with Intel (and a $1.2 billion payout). How many OEMs will want to cooperate if AMD isn't? None, would be my guess. Without cooperation at that level, the AG would have to rely on pricing levels for evidence, and pricing levels for CPUs and related hardware have continued to drop over the past years.

Heck, the only thing that has kept prices as high as they are is software. I guess Microsoft is in for another round of lawsuits and investigations any day now. IBM is facing anti-trust charges as well, aren't they? Looks like the tech sector is being gouged for revenue.

“BTW isn't it almost Christmas, where the fuck is IBM's high K process. Didn't they announce it with great fanfare 3 years or so ago?”

IBM? Allow me to keep you up on current events. The whole industry, besides INTC of course, is falling on their respective asses at 45nm or less.

As you know, it took years for AMD to finally get competitive at 45nm. Compared to INTC, their power and thermals are still in the toilet.

Whether TSMC has gotten its act together remains to be seen. The initial shortage of AMD’s new 58xx series seems to have passed, despite nearly two months of delays. I suspect TSMC didn’t have the volume production at initial release. From what I’m reading, the yields sucked. Any way you slice this thing, they are certainly having their share of problems. How bad? Try this bad.

“Let me repeat that, out of 416 tries, it got 7 'good' chips back from the fab. Oh how it must yearn for the low estimate of 20%, talk about botched execution. To save you from having to find a calculator, that is (7 / 416 = .01682), rounded up, 1.7% yield."

"Nvidia couldn't even hit 2%, an order of magnitude worse than the most pessimistic estimate. Ouch. No, just sad. So sad that Nvidia doesn't deserve mocking, things have gone from funny to pathetic.”

1.7%, Jesus H., it’s time to rethink your business strategy! They should consider making and selling plastic novelty doggie doo-doo. There’s more profit per unit!
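The yield arithmetic quoted above is easy to sanity-check. The 7-good-chips-out-of-416 figure comes from the quoted article; the snippet below just redoes the division and compares against the "pessimistic" 20% estimate mentioned there.

```python
# Verify the quoted yield math: 7 good dice out of 416 candidates.
good, total = 7, 416
yield_fraction = good / total

print(f"{yield_fraction:.4f}")           # 0.0168
print(f"{yield_fraction * 100:.1f}%")    # 1.7%

# How far below the low-end 20% estimate is that?
pessimistic = 0.20
print(f"{pessimistic / yield_fraction:.0f}x worse than the 20% estimate")
```

So the "order of magnitude worse than the most pessimistic estimate" line checks out: 1.7% is roughly twelve times below 20%.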

From the general consensus of the industry birds that flock here, things aren’t going to get any better, especially at 32 and lower. I get a BIG feeling things will get worse.

Whew! INTC has hit another home run on so many different levels. If the first generation of ATOM shook the likes of Nvidiot like a ragdoll and left AMDroid sucking its thumb in a corner in complete impotence, then only one observation comes to mind…

The FTC

If anyone doesn’t believe that this was INTC’s bone of contention in reaching a settlement with the FTC, then a serious review is in order. The timeline fits, Larrabee was conveniently dismissed as a failure, and articles 17 and 18 of the FTC’s complaint read like the 11th and 12th commandments set down by Moses himself.

This REAL PLATFORM SOLUTION is not only 50% smaller and 20% faster than the last generation, it will make AMD’s 5.2-billion-dollar dream acquisition a true fiasco. Add dual cores to the mix for shits and giggles; the entire industry has been blindsided by the ATOM platform. Make no mistake: they’re running scared and screaming for government intervention.

This thing has been coming down the tracks like a damned runaway freight train, the industry saw it, and so did the FTC. INTC HAS CORNERED THE MARKET in this area.

PERFECT!

THEY COULDN’T STOP INTC FROM DROPPING THIS BOMB!

In the CPU/GPU world, where 18 months is a lifetime for a product cycle, by the time the FTC and INTC hash out this dispute, ATOM’s “damage” to the market, the industry, and consumers will be irrevocable! (This is why the FTC is so keen to expedite the process.) The cows are out of the barn and it’s too late to haul them back in.

The industry has geared up for this big time and the other players are merely spectators. That bitch from NVDA knew it all along.

Stifle innovation, hurt consumers my ass. This revolutionary, inexpensive platform in conjunction with a trimmed down version of “obsolete” MS XP is about to set the compact mobile world on fire. OLPC, think again. One Netbook for every school age AMERICAN KID! That’s what I’m talking about.

(G, your “Classmate” (pun intended), Mr. N hasn’t a clue of what he started. If he does, he must be one miserable bastard.)

Enjoy the fireworks. This is going to be my daily daytime soap opera dream come true.

While INTC runs circles around the industry with ATOM processors, I thought it may be of interest for a small follow up on Prof. Negroponte’s progress on third world educational media devices.

Not only has he been instrumental in shipping over a million of these units, he is also planning a second and third generation of the XO-1 which includes a tablet design.

While seemingly unconcerned about domestic education or domestic markets, I’d say from the momentum he’s gathered internationally, coupled with his tenacious and altruistic desire to educate the third world, he has been quite successful despite the program’s early difficulties. In fact, he may just be looking down the barrel of a Nobel Prize for his efforts.

“Sparks, is this your new board?” Actually, out of fear of “hogging the blogging,” I had held this response to TONUS’s last two posts. I suspect everyone was either on vacation or in holiday mode.

“TONUS, while Clarkdale and ATOM are very impressive mainstream products, I thought perhaps a view from the other end of the product spectrum may be in order. EVGA has previewed an absolute monster of a motherboard. It’s a dual socket, X58 based affair, with no less than seven PCI Express slots.

Its release is obviously timed with the release of the 6-core Gulftown coming at the end of this quarter. Imagine the prospects: 12 cores, 24 threads, 24 gigs of memory, and 4 kickass graphics cards. Talk about the other end of the spectrum! Talk about overkill! This thing will be a great foundation for a workstation to die for.

Surely Apple will get there first, but this gorilla is sure to follow.

She’s a beaut’, isn’t she? I am sorry to say I don’t have the chops or the skills (like working for INTC, Boeing, NASA, or Pixar, for that matter) to unleash all that juice. I wish I did!

One small caveat, incidentally. I am compelled to agree that SEVEN PCI Express slots with NO PCI-X slots or any PCI slots is about as useless as tits on a bull, especially for high-end workstations.

Hell, unless WIN7, which I recently purchased for my Gulftown build coming in the 2nd quarter, does a good job of utilizing those extra cores, an i7 975 (or less) may be a more pragmatic purchase for more mainstream folks.

I’ll bite the bullet and get the 980X anyway; same cost, 32nm, and all the Nehalem goodness, if not more. More importantly, another long-term upgrade solution is my main reasoning.

Hey, they said a QX9770 quad was overkill a few years back. (Remember, it had a 1600 MHz native FSB.) Look at the mileage I got out of that thing. Besides, now they’re stuffing quads into laptops, and quads are SO mainstream. It makes all the damned naysayers look foolish, does it not?

Better to be ahead of the hardware curve than behind. This is gospel. A lesson my beloved INTC learned the hard way.

Just when I think Charlie D. has gotten his act together, he proves the old adage that you can’t teach an old DOG new tricks. In his case, he leads by example, like a theoretical proof mathematically describing the singularity at the edge of a BLACK HOLE. The mind has no depths!

As you recall, he was ‘Dancing in the Aisles’ with AMD back in 2007 when Barcelona was falling on its ass. I think Henri Richard would give Charlie a spin around the block in the Ferrari with a cute “Girl Friday” in his lap; Charlie would then print anything.

He is, as journalists go, quite well informed, and he does have an inside track to the real players. He does admit when he is wrong: “You will recall I lost a bet with Rahul at Voodoo PC about Dell using AMD processors, and this is my public penance.” Yeah, it was a pink bunny suit at an IDF. Big Paulie at INTC’s response was, “It's a good thing you write better than you gamble otherwise you'll be living in that outfit."

Enough said.

He’s started a new site correctly named ‘SemiAccurate’. I thought perhaps he would be more accurate than he was at the INQ. I was wrong. He’s at it again, this time with a photo of, get this, a 28nm wafer from GlobalFoundries, and he clearly states it’s not memory.

This Christmas my wife bought some lovely wrapping paper. You know, the holographic, multicolored plastic that bends light just like a wafer. I thought I’d paste a sheet on a circular 450mm piece of cardboard and tell everyone it’s made at 22nm under EUV, and how good it looks. Oh, I almost forgot: it’s not memory.

This time with a photo of, get this, a 28nM wafer from Global Foundries and he clearly states it’s not memory.

Sparks, this isn't really unreasonable. There is not much difference between 28nm and 32nm; 28nm is considered the half-node on the way to 22nm and is pretty much a 32nm derivative. And if I remember right, this process is supposed to be bulk Si.

This would actually correspond pretty well to the announcement from CES that Qualcomm would be using GF to manufacture its Snapdragon ARM-based chip. They also have another contract (with STMicro, I believe), so you should be seeing early revs of 28nm Si if they are going to have product out by 2H'10 as announced.

4 hours of talk time and 300 hours of standby. Not earth-shattering, but livable. And this is built on 45nm. When Intel moves Atom to 32nm and then revs the design (Medfield), I think they will be fairly comparable to ARM.

On the other hand, I'm seeing glowing reports on the ARM A9 architecture's capabilities. It should be mainstream by the time Medfield hits the market; it is even possible that ARM will be pushing the generation after A9 by then. Though judging by the rate at which A9 solutions are rolling out, I'm skeptical.

“GLOBAL FOUNDRIES has been way ahead of the curve with process tech when compared to any other foundry on the market”

When I read a statement like that, it gets under my skin in a big way.

Is INTC not a foundry? “Any other,” by my understanding, is an absolute, which is, by my estimation, absolutely wrong. Call it the associative law of horseshit.

Sure, when referring to a low-power/low-frequency process significantly less complicated than 32nm Core i7’s, perhaps. (Hey, you guys created the monster in me that knows the difference.) However, HV commercial success at GlobalFoundries on bulk still remains to be seen, be it second quarter or later. Further, as you pointed out, I seriously doubt INTC is insensitive to market projections or resting on its laurels at this juncture, be it now or six months from now.

This is why I believe Charlie is once again only ‘semi accurate’ in his report. And this is all from a pretty wafer that “looks good” without a working device, as opposed to Atom, which you saw nearly two years ago storming the market like a damned runaway freight train.

There is simply no way to stay ahead on design with what IS going to be an inferior process, by a lot.

INTEL with x86 is behind right now, but between the dollars and volumes from x86, the smartphone market will be theirs.

Apple may be doing their tablet without INTEL, like they did their iPhones, but 4 years from now at 15nm, when the only company producing volume-yielding chips WILL be INTEL, they will all be at Chipzilla's doorstep.

We will all have a smartphone running x86 in 4 years; it will dock with the power server in your home, which will also be x86, and it will connect wirelessly to your TV, stereo, and other laptops, all of which will also be running x86.

You can take that to sleep, and when we wake up in 2015 I'll be here to tell you, Sharikou, and all them other geeks living in their momma's basement: I told you so.

TWO YEARS AGO; time flies. Damn, those must have been prototypes in 45nm development. IBM claimed to be first, claimed to have some elegant, better flow, if I remember correctly. I wonder where that product is.

Has it shown up in some double-secret mainframe or server? Maybe they are only using those good, fast transistors for their own secret products and don't give the best to AMD, GF or the rest of the consortium... Hmm... in two years you can take a new technology node from feasibility to a complex CPU; at least INTEL does that, or that's what they claim, I think.

“TWO YEARS AGO, time flies, damm that must have been prototypes on 45nm development. IBM claimed to be first, claimed to have some elegant better flow if I remember correctly. I wonder where that product is.”

Interesting. I’ve often wondered how they prototype a process. Or rather, how they approach it economically, in the least amount of time, using the smallest amount of resources and materials.

I’m sure it’s not one 32nm transistor on a 300mm disc. “Hey, that 32nm transistor was here someplace; I can’t seem to find it!” That would be like trying to find a quarter in Texas from a satellite in orbit, and then asking someone to cut the state in half so they can see a cross section.

I figure there’s got to be a better way.

However, IBM gets a working transistor; whoopty-do. It’s gate first! It’s fast! It uses ULV! It’s breaking the laws of quantum mechanics! Subsequently, they print a couple of million transistors, and then they all don’t play nice or work well with others. All that annealing, stretching, squeezing... I think someone overcooked the coconut macaroons.

Hey, if I were a top corporate bean counter at IBM, I’d tell those marketing boys to keep their damned mouths shut, and tell the engineering boys they’d better have their ducks in a row before they go forward with statements like:

“This new approach to implementing high-k/metal gate will be available to IBM alliance members and their clients in the second half of 2009.”

Failing to realize gains by going to a simpler method (utilizing existing tools, for the consortium) with gate first is one thing. Bragging about the successes, then falling on your ass (along with the entire consortium) and blowing two years of time and money, is just stupid. No matter how you slice it, someone screwed the pooch here.

It seems to me IBM led them down the wrong path two years ago and no one’s talking, and now they’ve got to buy the tools (or perhaps pay INTC for the proprietary process?) anyway. This is evident in how far behind INTC everyone is in process. The INTC replacement tech worked, and it is still working. It’s the difference between industrial pragmatism and industrial pipe dreams.

I’ve got to admit, in spite of the way you’ve put things, you’ve been 100% spot on about the consortium conundrum. Yeah, the bucks, the tick tock, the multinational barriers, the individual needs, et al.

Hey, if I were a top corporate bean counter at IBM I’d tell these marketing boys to keep their damned mouths shut and tell the engineering boys they better have their ducks in a row before they go forward with statements like:....

While your comment makes good business sense, it doesn't take IBM's culture into account. Their culture is closer to the "publish or perish" mentality found in academia than the "can I make a billion of them" mentality Intel seems to have.

IBM's culture requires their engineers to get the scoop on the latest and greatest tech. Since nothing is as old as yesterday's news, you have to publish first. This attitude guarantees a lot of flash-in-the-pan publications.

IBM's culture is also why they attract some of the brightest minds in industry. You ignore the fundamental science coming out of IBM research at your peril. Where IBM falls down is in finding a way to put all the pieces together.

While your comment makes good business sense, it doesn't take IBM's culture into account. Their culture is closer to the "publish or perish" mentality found in academia than the "can I make a billion of them" mentality Intel seems to have.

This hits the nail on the head... what you need to publish a paper and what you need for manufacturing can be two entirely different things. I have no doubt IBM got the performance they quoted in their papers, fully expected it to scale, and then ran into the myriad of integration issues that are the reason SiO2 was the gate dielectric of choice for over 20 years. Keep in mind SiO2 (or SiON) is GROWN from the Si substrate (not deposited), which makes the interface real nice, and it is a very stable compound.

Now, HfO2 is also stable in some respects (it's used as a heat shield in some applications), but electrically it is a much more complex animal. The deposition also leads to other issues: you have to prep the Si interface, and how you do that significantly impacts both how the HfO2 is deposited and the quality of the film.

I suspect IBM would also be having problems with gate-last tech (plus there are new integration concerns that don't exist with gate first). And while it seems IBM has been working on this for a while, Intel started pre-2000 on this; they just weren't as vocal/publication-happy as IBM.

Science for science’s sake is nice and dandy. Today’s challenge is the fine line between research, science, and developing something useful from it. Gone are the cushy days when IBM had Watson, Ma Bell had the Labs, and Japan had NTT doing what we called applied science. Pure science still happens in the universities and consortiums, but like the applied stuff it is getting more and more expensive; we’re talking hundreds of millions if not close to a billion per year. You can’t do it on an NSF grant, or if your company can’t make a ton of money from it.

If you think about materials, there is the fundamental understanding and then there is the practical use of that capability to meet a need. Intel, from what I can gather, really focuses on the second. Looking back at IEDMs or other technical conferences you don’t find too many papers from Intel; you do see more these days than say 20 years ago, but that is natural, as they are the leaders now versus then.

Specifically for silicon, IBM, as much as they’d like to say they are the leaders, simply are not. They don’t invest enough, don’t manufacture enough, nor is it strategically important to them. They have a legacy in it, but beyond that the microelectronics division could go away tomorrow or be spun off, and IBM’s bottom line might look better. That isn’t the case for Intel: silicon is core to their competitive advantage and leadership.

Without Arab money and New York state money, I think IBM’s days of rallying the silicon consortium in NY are limited. Tick tock, tick tock.

It's stunning that with all the crap about Atom eating into margins, gross margins are going up... perhaps the idiots who confuse low ASP with low margin may start to think about the cost part of the equation when trying to predict margins.

...well actually it's not stunning, if you actually understand what margin means and have half a brain.
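The commenter's point is just arithmetic: margin depends on cost as well as price, so a cheap-to-make low-ASP part can carry a margin comparable to an expensive high-ASP one. A minimal sketch, with entirely hypothetical numbers (not real Intel figures):

```python
def gross_margin(asp, unit_cost):
    """Gross margin as a fraction of revenue: (price - cost) / price."""
    return (asp - unit_cost) / asp

# Hypothetical illustration: a small, cheap die vs. a big, expensive one.
atom_like = gross_margin(asp=45.0, unit_cost=9.0)      # low ASP, tiny die
big_core = gross_margin(asp=200.0, unit_cost=50.0)     # high ASP, big die

print(f"low-ASP part:  {atom_like:.0%}")   # 80%
print(f"high-ASP part: {big_core:.0%}")    # 75%
```

With these made-up numbers the low-ASP part actually has the *higher* margin, which is why looking at ASP alone tells you nothing about where gross margin is headed.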
