11.03.2010

There have been several reports recently on Intel's agreement to build FPGAs for Achronix on their upcoming 22nm technology. As far as I know this is a first for Intel. Not only are they building someone else's designs on an Intel process, but they are building those devices on Intel's leading edge technology.

Intel makes the most money off of their leading edge process. In recent presentations Intel has made a big deal out of how quickly they are ramping their newest process technologies. Faster ramps mean earlier crossover from the old technology to the new technology. Driving towards earlier crossover means higher profit margins and shorter time to repay the development and retooling costs associated with moving to a new process node. So I have to ask: Why would Intel sacrifice any of their early leading edge capacity for what is essentially foundry work?

The articles I've seen have suggested two reasons. The first is that Intel is looking to offset some of the R&D costs of process development. The second is that Intel wants to get back into the Field Programmable Gate Array (FPGA) game.

In my opinion, the idea that Intel is looking to offset R&D costs with this move is absolute rubbish. Anyone that is willing to take an objective look at this would come to the same conclusion. Let me give an example to demonstrate why I don't think this line of speculation is worth the pixels it takes to print it.

Suppose I can sell a product for $100 and it costs me $50 to make. Let's also say the design work costs me $1000 up front. So if I sell 1000 units, I make $50000 minus the $1000 for design work. I net a total of $49000.

In the foundry model, I save the $1000 design cost up front and still spend $50000 to make the 1000 units, but I can't pocket the full $50000 margin because the customer wants to make a profit as well. Recouping their design costs isn't sufficient for them. So let's say the arrangement leaves me with 70% of my original margin. That gives me $35000 in profit.

The model here is grossly oversimplified, but it illustrates the point. Building my own designs I make $49000, and building product as a foundry I make $35000. That means I'm making significantly less on the foundry product, and last I checked, making less isn't going to help offset my development costs. Instead of helping me, it reduces my margins and increases the time it is going to take me to recoup my R&D investment. Remember, we are talking about Intel's leading edge technology here, not trying to fill fabs running an old technology to keep them profitable longer.
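For readers who want to poke at the numbers, the toy model above can be sketched in a few lines of Python. These are the hypothetical figures from the example, not real Intel numbers, and the 70% is treated as the share of the original margin the foundry keeps:

```python
# Hypothetical numbers from the example above -- not real Intel figures.
units = 1000
price = 100          # market price per unit
unit_cost = 50       # manufacturing cost per unit
design_cost = 1000   # one-time design cost

# Building and selling my own design: I keep the full margin,
# but I pay the design cost up front.
own_profit = units * (price - unit_cost) - design_cost

# Foundry model: no design cost, but the customer takes a cut,
# leaving me (say) 70% of the original margin.
margin_share = 0.70
foundry_profit = units * (price - unit_cost) * margin_share

print(own_profit)      # 49000
print(foundry_profit)  # 35000.0
```

Even tweaking the assumptions, the foundry scenario only catches up if the margin share approaches 100%, which defeats the customer's purpose in outsourcing.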

The second theory is that Intel wants to get back into the FPGA game. Intel once had an FPGA program and sold it. In the EE Times article a spokesman for Achronix was quoted as saying:

"If Intel wanted to be in the FPGA business they would be already. They certainly have the cash."

And he is right. If all Intel wanted was to be in the FPGA business, they would simply buy Achronix or a similar company.

I believe the author of the EE Times article comes close to explaining what Intel is doing when the author says:

The relationship with Achronix could be a precursor to Intel eventually combining programmable logic with its Atom cores on the same die to create a new type of device. Earlier this year both Xilinx and Actel Corp. announced products that combined their programmable logic technology with hard ARM processor cores.

In my opinion the author of the EE Times article isn't looking far enough ahead to see what Intel is really looking to accomplish. While Intel may well want to create a new device that combines Atom and FPGA circuitry, I believe there is a much larger scope to this announcement. This move is really about Intel's Atom SOC strategy, not just FPGA devices.

In order to be a real player in the SOC space (smartphones, autotainment systems, etc.), Intel needs to develop a robust SOC capability they don't currently have. Up to this point, the SOC designs that I've seen Intel previewing are all in-house Intel designs. But many of the players in the SOC space have their own proprietary designs they build around the central processing core. To make that happen, Intel needs to learn how to build external designs on the Intel process.

But my reading leads me to believe that Intel's design rules are fairly restrictive when compared to the traditional foundries. Since we are talking SOCs here, Intel can't just tweak the process for an individual customer. The external designs have to work well with the same process Intel is using to manufacture Atom. In order to work effectively with customers in this new space, Intel needs to learn how to work in conjunction with external design teams to get the designs laid out in a way that will take advantage of Intel's process capabilities and yield well.

I believe the Achronix move is actually a first step in Intel's SOC strategy, a strategy that will allow Intel's customers to design their unique features around an Atom core to make a truly unique product. If this strategy proves successful, Intel and their partners will be able to offer a distinct product with clear differentiation in the marketplace. This is how Intel intends to differentiate future Atom products from competing ARM products.

I tend to agree with a few of the folks who are saying it's more about the tech involved with Achronix, and the foundry aspect is more of a side item. And mixed in with the comments on the article is yet another comment on potential issues with high-k in the foundry world.

It would be tough for Intel to compete in the foundry business without taking a hit to their gross margins. While it would increase revenue, I'm not sure Intel is willing to make that tradeoff. However, if they had a substantial technology advantage (not just time to market) that let them sell at higher prices (or at lower cost through significantly better yield, say if the rest of the industry was having yield issues) for a sustained period of time, then maybe they would go into the foundry space.

Intel's advantage has been time to market (and volume) on the manufacturing side. While that's critical on the CPU side, foundry volume tends to lag, and while there may be a small chunk of the market willing to pay a premium for leading edge, the vast majority is probably still 65/45/40nm business, which becomes as much of a cost decision as a tech decision.

Wow, about time for a new article, seeing as how we're halfway through Q4 so the Q1 earnings post is a bit stale by now :).

Anyway, very interesting and I tend to agree this is more about Intel positioning themselves for pushing Atom into more spaces, rather than trying to get max usage out of each fab's capacity. But I don't discount that aspect either - occurs to me that the max ROI on fab equipment is when it is 100% utilized. So if you can pump out 40K wafers a month but are only selling 35K of your own stuff, why not get a customer or two for that extra 5K wafers? Or maybe I'm not following the financial analysis in the article too well :P.

Another thought - why is there still a link to Scientia's blog on the front page? Last I heard, he abandoned his blog (no updates for almost a year now), abandoned AMDZone, and who knows what else, in favor of playing World of Warcrack. :P.

Perhaps his demonstration of complete noobiness in overclocking his i5 finally did him in, after years of not being able to predict his way out of a paper bag :). Or perhaps he did succeed in oc'ing his i5 and now is an Intel fanboy!

So if you can pump out 40K wafers a month but are only selling 35K of your own stuff, why not get a customer or two for that extra 5K wafers?

I agree with what you are saying, but the articles all state Intel will be doing this work starting in 2011. Intel will still be ramping 22nm in 2011, so they are sacrificing limited 22nm capacity to do this. It isn't a case of filling factory capacity. It is a case of giving up limited capacity they could produce Ivy Bridge processors on.

AMD Analyst Day presentations are online, and there was one interesting observation.

The old "closing the gap" process technology theory now appears to be dead.

AMD is forecasting 32nm production start (I think they are playing a bit fast and loose with "start," as that could mean adding 2-3 months for actual shipment) in mid/end Q2 (or Q3 for shipment?). This would put them 18 months behind Intel (remember the talk that it was 1 year and closing?).

After that they seem to migrate to the half nodes (20nm instead of 22nm, 14nm instead of 15nm) for future releases... but these are spaced out by slightly more than 2 years. The 20nm node is Q4'13, which, if Intel is on time with their 22nm tech, would put them 2 years behind Intel (more or less a full node), as Intel would be migrating to 15nm at around that same timeframe.

AMD is history... IBM is history... GF will be history, at least in competing on the bleeding edge of technology. What was that I heard from some yahoo idiot from GF/AMD about how they were going to close the gap? They announced 45nm high-k metal gate 3 years ago and haven't shipped anything. P.S. Intel has shipped hundreds of millions of chips across two generations now, and a 3rd generation at 22nm is coming next year. Maybe first really means last.

Where is Sharikou, where is Scientia, where are all the other fanboys at AMDzone?

The only question now is ARM or low-power Atom: who will win?

Let's see, the laws of physics apply to all. One is two generations ahead on process; that equals 4x the transistors and likely 1/2 the power. With a couple spins, ARM is going to be history too.
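For what it's worth, the "two generations = 4x transistors" claim is just classic ideal-shrink arithmetic. A rough sketch, assuming the textbook ~0.7x linear shrink per full node generation (real processes deviate from this, and the power claim assumes idealized Dennard-style scaling that was already breaking down by this era):

```python
# Rough, idealized node scaling: each full generation shrinks linear
# dimensions by ~0.7x, so area per transistor roughly halves per node.
linear_shrink = 0.7   # per generation -- textbook assumption, not measured
generations = 2

# Area per transistor scales with the square of the linear dimension.
area_scale = (linear_shrink ** 2) ** generations   # ~0.24x area
density_gain = 1 / area_scale                      # ~4.2x transistors per mm^2

# Under ideal (Dennard) scaling, power per function falls along with
# area, which is where the "likely 1/2 the power" intuition comes from;
# in practice leakage and frequency push the real savings lower.
print(round(density_gain, 1))
```

So a two-node lead does pencil out to roughly 4x the transistor budget under ideal assumptions, even if the power half of the claim is optimistic.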

It looks like, with all the attention that AMD is getting lately, that 2011 might be AMD's ass-kicking year... and by the little comments left in this site as of lately, you guys are in denial... have fun with the onslaught that is about to come...

I can only assume that Anonymous just above me is being either sarcastic or ironic.

re: Phenom 6-core, I saw that pricing and my jaw dropped. Newegg has the 1090 for as low as $229 if I am not mistaken. You wonder if Intel even notices that AMD exists now, as they do not seem to be pressured to lower their prices.

In any event, I could upgrade my backup system (2.4GHz core2 quad w/8GB) to a 1090 with 16GB for less than $700. I might just do that before the year is out.

Don't forget that AMD will play a double-sword game. It seems that AMD will license its processor core designs to other Globalfoundries customers while still being a fabless company and at the same time being a Globalfoundries stakeholder. Intel has a complete set of design IPs, while AMD is betting its future on CPU, GPU, and Fusion designs and will not play the peripherals game of communications chips matched with standards like Bluetooth, WiFi, 3G/4G, and WiMAX.

So in the end, Intel will be vertically integrated while AMD will create different competitors for different markets.

“that 2011 might be AMD's ass-kicking year... and by the little comments left in this site as of lately, you guys are in denial...”

Dead wrong, flamebait. Ole Sparks here, speaking for myself (and others if I may be so bold): we are merely basking in Intel's magnificent technological lead in both process and architectural developments over the past three years.

Also, everything predicted on this blog over the past three years has come to fruition.

Remember the words of the ‘GOD of Imitators’, “only real men have FABS”.

That said, not only has AMD “cooked the books” financially during the past 12-plus quarters, they’re even “cooking the books” with their current graphics solutions, despite having a commanding lead in that market.

Not that I mind, my TWO 5870’s in CF are fantastic. However, no matter how hard they try the boys at Advanced Mismanaged Designs can’t seem to stop shoveling marketing bullshit, even with a fabulous product lineup.

And you believe their nonsense regarding logic process at 32nm and lower?

Speaking of Intel getting into the FPGA business, I saw this article in EE Times today.

Notice that the article says that Intel will be putting Altera FPGA devices inside the Intel package. The next logical step would be to move to a true SOC. Perhaps something designed by Achronix in return for a little fab capacity?

These puppies run pretty warm at 2.7-3.6 watts (they are being marketed for embedded systems) but I have to believe that moving to an SOC design on a 22nm Intel process would cut those power numbers down a lot.

I don't want to sprain my arm patting myself on the back, but I recall saying this announcement wasn't about Intel looking for foundry work. :)

Sparks, surely the article you directed me to is in error. AMD is the only honest and truly benevolent semiconductor manufacturer out there. They are looking out for the consumer. They would never deceive their customers by not being completely transparent. I have to believe that the author of the article was referring to Intel, not AMD. (sarcasm intended)

Don't forget that AMD will play a double-sword game. It seems that AMD will license its processor core designs to other Globalfoundries customers while still being a fabless company and at the same time being a Globalfoundries stakeholder.

Who exactly have they licensed processor IP to?

As to being a GF stakeholder, that stake is dwindling rapidly, and AMD said during the last conference call that they expect it to continue (it's just a matter of time before they completely sell off their stake). The issue is that whenever Abu Dhabi puts more capital into GF (like the NY expansion... at least the part Sparks isn't paying for), AMD has to contribute their percentage or their ownership stake gets further diluted (which is what is happening). This is probably a good thing, as an ownership stake in GF at this point means a share of the losses and is a further drag on AMD's bottom line. (Though long term this may be an issue, as no ownership stake = less leverage with the foundry.)

A long while back I mentioned this might go down as the biggest IP export out of the US. To put things in perspective, the US is careful about even shipping 65 and 45nm equipment to export-controlled countries... AMD has managed not only to effectively export leading-edge equipment, but also IBM's process IP. I'm not sure if this would have gotten approval from the FTC/other US regulatory agencies without the ownership-stake shell game.

Well, it may not be great for AMD, but I love the pricing. Got that 3.2GHz 6-core for $230, a nice Asus 890FX board for $165, and four 4GB DRAM sticks for $228 total. Grabbed a Crucial 128GB SSD and a Radeon 5870 with the savings.

If AMD can turn things around, more power to them. But when you are practically dumping your fastest desktop part, you are hurting bad. As for me, I've got a nicely souped-up graphics and gaming workstation for peanuts. Hah!

Note to Flamebait regarding this blog's major players: As you can see, they are alive and kicking, to say the least. The two BIG dogs are still keeping a sharp eye on things with EXPERT critical analysis. Even ole’ LEX is still chopping away.

Finally, I do need to issue a correction. Not only are we basking in our predictions concerning Intel’s magnificent turnaround and AMD’s MAJOR F-UPs, TONUS, a pragmatist to the end, is building AMD six core machines on the cheap, and why not?

And I quote, “HAH”!

TONUS, please keep us posted on the AMD six core performance. I’d like to know how it compares to your i7 920. As you know, the SPARKS household is decidedly INTC (an understatement, my children and wife would think I’ve gone mad), but I’d really like to know your take on the AMD rig once you get it together: thermals, power consumption, et al.

“Huang is a recipient of the Dr. Morris Chang Exemplary Leadership Award from the Fabless Semiconductor Association in recognition of his exceptional contributions to driving the development, innovation, growth, and long-term opportunities of the fabless semiconductor industry.”

And……………

“Prior to founding NVIDIA, Huang held engineering, marketing, and general management positions at LSI Logic, and was a microprocessor designer at Advanced Micro Devices.”

Interestingly enough, David Kanter has an article on the Intel-Achronix deal:

http://www.realworldtech.com/page.cfm?ArticleID=RWT110810164231

Kanter agrees with ITK that it is pretty unlikely Intel is testing the foundry waters - for one thing, TSMC's high mark recently was something like 50% gross margins - Intel is at 65% and climbing. Second, not only GloFlo but now Samsung has announced they want to broaden their foundry business. So that means low prices and low profits unless one of the 3 has a killer process advantage.

Kanter however does think that Intel may be using the partnership to look at an FPGA manufacturer acquisition. The partnership would give Intel a leg up on merging the technologies.

A third theory of his is that FPGAs have unique computation abilities that, for certain workloads (HPC for example), can exceed the performance of both CPUs and GPUs. Plus, being programmable, they can take over functions such as digital signal processing for wireless (or perhaps realtime speech recognition) and similar tasks, in server and high-performance embedded areas.

Kanter however doesn't think that the high-performance Achronix FPGA strategy will fit well with the Atom in the tablet/smartphone market, at least not at the moment.

The last scenario seems to be Kanter's most likely choice:

"The hypothesis that seems the most likely is that Intel is strategically engaging with companies that complement their current and future product portfolio. The fact that Achronix will have access to QPI seems to imply that it will be paired with Xeon server or embedded processors. There is definitely a niche for FPGA acceleration and Intel already has partners using the older front-side bus. For the future though, QPI is necessary and may be the first step towards on-die integration with a ring (or other topology) interconnect. A tightly coupled FPGA coprocessor is extremely beneficial for high performance embedded applications (e.g. networking and storage) and would also be a good counter to the use of discrete GPUs for HPC. While FPGAs are challenging software targets, they can achieve higher performance in some cases than GPUs and also be used for specialty problems (e.g. deep-packet inspection, cryptography)."

"The bottom line is that Intel is not likely to enter the foundry business and the most plausible explanation is that they are pursuing complementary technologies. More generally, Intel understands that they cannot do everything for everyone (nor do they desire to do so). This is especially true when it comes to the embedded market, which is nearly impossible to describe because of the variety and breadth of applications. There are already examples of Intel working with third parties to create products that can address different embedded niches. This suggests that Intel may pursue other partnerships in the embedded space to move Atom into new markets, which could actually have a far larger impact than the Achronix deal. In some senses, this would be a second attempt to achieve the same goals of Intel’s unsuccessful partnership with TSMC on Atom – empowering the x86 ecosystem with a variety of hard and soft IP from third parties."

Since AMD's Fusion appears to be replacing their FP units with some sort of hybrid FP/GPU for enhancing vector/matrix computations, maybe adding an FPGA to the GPU on Sandy Bridge's ring bus might be Intel's answer.

Tonus, ya just gotta love those 5870 cards. They did two things in one fell swoop. They blew away Nvidia’s seemingly endless crowing and bragging while simultaneously laying to rest the $600 to $700 graphic card conundrum we suffered for years. Good riddance.

The 5870’s were so nice I did ‘em twice.

As for me building a new machine, I haven’t been bitten by the bug... yet. I’ve been romancing some outrageously expensive vacuum tube hardware (DIY of course). Very old school obviously, but it’s something I can understand conceptually and build in my modest lab at the same time. I wish I had GURU’s head for MOSFETS.

Not only is this stuff hideously expensive, it weighs nearly 50 pounds, and there’s enough lethal voltage in the power supplies to fry every CPU on Long Island ---all at once! Furthermore, 90 dB or lower speakers need not apply; which presents another very expensive and interesting challenge.

I don’t get much flak from the wife for the posh 50 pound space heater as she developed an ear (thank God) for the subtle differences/nuances in recorded material. She swears she can hear Sade’s lips parting as she sings. (And nice lips they are) My wife says it’s like starting over with the same old stuff.

That’s one way for a husband to work some new magic into an aging marriage.

"The only thing keeping us from 4GHz is a lack of competition to be honest. Relying on single-click motherboard auto-overclocking alone, the 2600K is easily at 4.4GHz. For those of you who want more, 4.6-4.8GHz is within reason. All on air, without any exotic cooling."

Tonus: So you're saying I missed out on the $20 hex-core? Or the buy-one-get-two-free Phenom????

LOL - dunno if AMD will sell them that cheap, but yeah I expect they will be dropping prices in the next month or two, for (1) getting rid of inventory for Bulldozer which is supposed to come out in April for the desktop version (according to John Fruehe), and (2) to counteract what looks like an assault on the midrange CPUs by Intel.

What I find particularly interesting are the mobile versions, esp. since I'm in the market for a new laptop. IIRC mobile is a much larger segment now than desktop, which is why AMD mounted their own assault on that space with their Fusion branding (CPU, graphics, chipsets, not the integrated GPU stuff). I suspect SB is gonna torpedo that boat pretty quickly, esp. if a laptop can use the on-die GPU for battery life and then switch to discrete GPU for games. Considering that SB has dedicated circuitry for encoding/decoding, I expect many users will not be using any discrete graphics very often. Anandtech's review of a pre-production laptop showed some incredible battery life on a mid-size battery. AMD's only near-term answer seems to be Llano, which has powerful graphics and old K10 architecture, and probably won't be out until Q3 sometime. And Anand thinks that the 22nm shrink Ivy Bridge will likely double the on-die GPU performance of SB, as Intel adds more transistors to it. So if Llano is further delayed and IB released in Q4, they might be direct competitors.

Sure, the QX9770 is decidedly long in the tooth, no doubt. But what a ride it has been for this fabulous piece of hardware. As you know I really don’t like incremental gains in performance, and upgrading for the sake of upgrading, simply to keep up with the Joneses.

However, if this thing is ANYTHING NEARLY what they’re claiming:

“easily at 4.4GHz. For those of you who want more, 4.6-4.8GHz is within reason. All on air, without any exotic cooling."

Then this is a substantial increase in performance, given architectural improvements and innumerable process/design tweaks. What they’ve done is prove Moore’s Law is alive and well. They’ve doubled the performance since my last purchase. And that’s quite enough to separate me from my money. No performance increase, no money from SPARKS.

These days my beloved INTC has only to compete with itself. But then again, this is something we all predicted.

So much for the “we need competition to raise the performance bar” theory. Horseshit, I rest my case.

I decided to put down the soldering iron for a moment to see what’s up next on the INTC big bat hit parade. Well, no less than this popped up.

i7 X995, for you tight-lipped insiders shoveling silicon at Chipzilla. Yeah, the word’s out and the lunatic fringe is in a tizzy with this ES, and it looks like 2Q ’11.

The link below will give all the non insiders (like me) a peek at the 3.6 gig badass. Aside from the expected top honors, along with the ridiculous domination by INTC hardware, my QX9770 (mentioned in the previous post) has been absolutely put to shame by a factor of two. Now that’s kickass.

Still looks like Moore’s Law to me, and I guess it’s time. Frankly, I’ve been bored to death with all the hoopla surrounding the i3’s and i5’s.

From techeye: An official statement from AMD on the resignation reads: "The board believes we have the opportunity to create increased shareholder value over time. This will require the company to have significant growth, establish market leadership and generate superior financial returns. We believe a change in leadership at this time will accelerate the company's ability to accomplish these objectives."

That seems to be pretty sudden and unexpected. I can't imagine that the board just up and decided during CES that it was time for some new blood. I'm sure the rumor mill is churning at high speed right now.

I hate to speculate but you have to wonder if there was something specific that happened to cause this and "results and direction" is just a nice way of burying it.

If the board felt they needed new blood to drive growth there would probably be more of a transition planned out or a few candidates in mind - that doesn't appear to be the case here. If they were unhappy with results/direction this would be something brought up on more than one occasion and you'd think they'd have a plan B in mind.

According to the NY Times, Mr. Meyer’s push into the portable market wasn’t as effective as it should have been. Even Wreaktor was surprised at the Board’s move, and he said so.

My Wall Street buddy tells me that Mr. Meyer no doubt has a nice fluffy parachute for a nice soft landing. Personally, I think he’s earned it during his tenure at AMD, especially during the golden years of 2005 and early 2006. I can’t think of anyone left at the company that was the embodiment of AMD past, the last of the old guard perhaps?

I think the board just realized that the real casualty in this ARM/Nvidia vs. Intel battle is really going to be AMD, as they don't hold any magic cards anymore. Firing Meyer is not going to fix anything. I think AMD is really dead at this point.

Intel Corporation today reported full-year revenue of $43.6 billion, operating income of $15.9 billion, net income of $11.7 billion, and EPS of $2.05 – all records. The company generated approximately $16.7 billion in cash from operations, paid cash dividends of $3.5 billion, and used $1.5 billion to repurchase 70 million shares of common stock.

For the fourth-quarter, Intel posted revenue of $11.5 billion. The company reported fourth-quarter operating income of $4.3 billion, net income of $3.4 billion, and EPS of 59 cents. Fourth-quarter revenue, operating income, net income, and EPS were also all records.

“2010 was the best year in Intel’s history. We believe that 2011 will be even better,” said Paul Otellini, Intel president and CEO.

Also looks like server revenue jumped way, way up. I wonder why AMD wasn't able to make a comeback with Magny Cours?

Coming after a two-year period where they've been fined a billion dollars and paid AMD a billion dollars, that's very impressive. With Intel enjoying another year of record income and profits, how long until the next round of anti-trust investigations?

Frankly, I get the feeling that the governments that have been after Intel are actually glad to see them making even more money. When you have a piggy bank like Intel (easy to demonize, can take from them under the guise of fighting for the little guy, has more money than god, etc) you do not want it to run dry.

Record revenues, record profits, and bullish forecasts from the CEO. All seems to be going well in the house of Intel. But there are some worrisome clouds; does Intel have the right irons in the fire to fix them? Let's see: as we race toward everyone having smartphones, we rely more and more on the cloud. To first order that means a ton more sales of high-margin servers, and that makes Intel's bottom line look good.

But where is all the unit growth? What does everyone lust after? Sorry, it ain't Intel. Isn't it funny that in Intel's best quarter, most people in the world are lusting after something without Intel inside.

This is like 20 years ago, when Intel ate IBM, DEC, and everyone else from the bottom.

Intel's board better wake up: if Paul doesn't deliver a good tablet or mobile phone chip, he should be fired just like Dirk.

Oh Paul, the clock is ticking no matter how good the bank account looks!

Well ITK, finally, what I personally have been waiting for. A full-blown PC with touch screen and a portable full-sized keyboard @ 2 pounds, not some cut-down eye candy machine running trendy little apps. Things can only get better from here.

3 hours doesn’t seem like much, however, it’s plenty long for my commute.

Anony.... I think the server share for AMD was somewhere around 7-8% (which is stunning considering where they were not too long ago)

1P and 2P still make up the overwhelming majority of servers, and while multicore has increasingly diminishing returns in the notebook and desktop space, the one area which tends to utilize core counts effectively is the server space (which then cuts further into 4P+ space). Intel was eating massively into this space even before QPI, thanks to 1P and 2P gains, and from Nehalem on they are eating into all server markets.

I think the board's direction is a realization that Intel owns the high end space (in all segments) and ARM will start squeezing the low end space (possibly across all segments). AMD has been able to cut a niche in this space through slashing prices significantly below Intel... however this won't work with ARM. For those willing to cut corners in performance for price (the "good enough" crowd), ARM will soon be a potentially viable option over AMD.

So the question is where does AMD go? Back into high end space and grind against Intel (which hasn't worked out too well unless all the moons align like they did with K8) or try to spread out in low end and embedded space? It seems the ATI wing of the company (which was rumored to be the ones pushing a more dramatic change in strategy and pushing Dirk out the door) feels the chances are better in the low end. And in my view they are probably right. This is where the unit growth is (even if it is low margin) and there are so many markets it might be easier to make some headway.

I wouldn't be surprised to see the next CEO from a communications related company or perhaps even outside the semiconductor space.

AMD and GlobalFoundries have made it clear that they're committed to gate-first manufacturing through the 28-nm half-node, but as Real World Technologies' David Kanter reports, GlobalFoundries' 20-nm fab process will be a gate-last one.

Seems odd to make a change to gate-last when IBM was telling the world that gate-first was:
- similar/better performance
- cheaper
- easier for manufacturability

I guess IBM decided to go with the lower performance, a more costly, and more difficult to manufacture process! (probably just to help Intel out as they seem to be struggling with the whole high K thing)

This is exhibit 458....IBM - great with announcements and results on research based solutions, manufacturability - not so much. (see also: SiLK, air gap, EUV, strain....)

According to the earnings conference, AMD is banking on Bulldozer to eat into Intel's server marketshare, starting sometime in the 2nd half of this year. However I recall similar prognostications from AMD management, first with Istanbul and early last year with Magny Cours. Considering that their earnings report shows lower gross margins due to lower ASPs on CPUs, and that revenue was flat, I doubt they ate much if at all into Intel's marketshare.

And even if BD turns out as good as AMD would have us believe, Intel will be answering with Ivy Bridge not too long afterwards..

Was it, what, 5 years ago that Intel announced to the world it was going high-k metal gate?

Not to be outdone, IBM assembled a press conference, if I recall, to tell the world it was going there too. They were first in announcement and proudly beat their chest that gate first was superior for many things, and that they were first. They were first with low-k dielectrics, first with strained silicon too, for those that remember. If you go even further back they were first for copper, first for DUV, first for shallow trench, first for a bunch of other notables. But there is a huge difference now, as gone are the days when IBM doing something meant they actually had to make a few hundred million chips cost-effectively.

Fast forward a few years and more than 500 million chips sold. I can't recall seeing a single other manufacturer with high-k metal gate in production besides one company. Looks like the ones that were last are really first, and right. Who in their right mind makes two changes at once to the two most important elements of the transistor, the gate dielectric and the gate material itself? It's a messy business, those two things. It's something the industry took a good decade to get right and then milked for 30 years. You don't go about changing it except after deep thought about what is best and right. It's hard enough once, let alone having to change again a few years later.

Is this “inside the industry” rumor mill, or are there any links to this juicy tidbit?

Not inside the industry info - saw it on a few places on the web (and one was from a generally credible source) - I'll see if I can dig up a link.

In other interesting news, AMD is apparently potentially fabbing their 28nm graphics chips at TSMC and not GLOFO - this was from Fudzilla, so take it with the usual huge boulder of salt.

If true though this is rather stunning.... 28nm is supposedly done on bulk Si at GloFo (maybe they are done in a different sense of the word?) If it were in good shape why would AMD use TSMC (they must have better pricing terms with GloFo and I'm sure they have preferential access to capacity)

"28nm is supposedly done on bulk Si at GloFo (maybe they are done in a different sense of the word?) If it were in good shape why would AMD use TSMC (they must have better pricing terms with GloFo and I'm sure they have preferential access to capacity)"

I love your brand of humor.

Does Glow Flow make anything for anybody besides AMD? In fact, have they made anything?

Well they are currently making AMD chips (not sure about chipsets - I still think those are being done at TSMC)

Graphics is either all TSMC or a mix.

Glow Flow might have some other customer production but I'd imagine it is mostly (all?) 40/45nm work and predominantly the bulk Si process (there aren't a lot of people using SOI). Well, I guess technically they also have all of the Chartered customers after that acquisition too. (but that's probably more 65nm/90nm and maybe even 130nm work)

Here's another part of the problem (besides the fact that they are having a hard time winning the graphics business from the company they were spun off from): the process change from gate first at the 32/28nm nodes to gate last at 22nm may scare away a few customers at 28/32nm. While I don't think foundry customers do a lot of shrinks for a given product, a change like that would make a shrink much more difficult than a 'simple' scaling of feature size. This will probably be more of an issue for AMD if they are shrinking some of their CPU designs from 32/28nm to 22nm, as the gate changes will affect design rules and hence design (again, beyond simple scaling).

Who knows, maybe AMD will fab their CPUs at TSMC too? (That's a joke, as TSMC does not have an SOI process that I know of at the moment.)

This is probably a blessing in disguise for the OEMs, as it gives them time to clean out the old inventory without having to slash pricing on the older chips/chipsets.

Apparently this was on a late metal layer, which is a lucky break as it means almost all of the inline material can be held and won't have to be scrapped. If it was an early step in the flow that would be a lot of material scrapped.

I'm curious as to how Intel came up with the $700M-1B revenue figure... it's a revenue hit for this quarter, but the cost hit must be dramatically lower. I also wonder if it's lost revenue or delayed revenue (I assume the Sandy Bridge purchases will go to a mix of older products, AMD products and simply delayed purchases). I wonder what the overall cost and revenue hit will be (my thinking is lower than the forecasted Q1 revenue hit, as they probably will gain some back in Q2).

Me too, that’s a pretty big number considering it’s early in the product cycle and only one quarter, that’s 300M a month, 10M a day, give or take. Further, those are revenue numbers, not total expenditures to correct the problem(s).
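For what it's worth, the back-of-envelope math here checks out; a quick sketch (the ~13-week quarter is my assumption, and the $700M-$1B range is just Intel's quoted figure, not any official breakdown):

```python
# Rough sanity check of the per-day revenue impact implied by
# Intel's quoted $700M-$1B quarterly hit (the 91-day quarter is
# an assumption for illustration, not an Intel figure).
days_per_quarter = 91  # ~13 weeks

for quarterly_hit in (700e6, 1e9):
    per_month = quarterly_hit / 3
    per_day = quarterly_hit / days_per_quarter
    print(f"${quarterly_hit/1e6:.0f}M/quarter -> "
          f"${per_month/1e6:.0f}M/month, ${per_day/1e6:.1f}M/day")
```

At the top of the range that works out to roughly $333M a month and $11M a day, so "300M a month, 10M a day, give or take" is about right.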

Unless, the problem is in the tooling, (just an off the cuff guess given the numbers).

Perhaps this is a classic case of the disadvantage of “copy exactly” in every FAB?

Given that Intel is shipping integrated CPU/GPU is this an actual share loss or an accounting issue?

In the mobile space, while there is still a significant number of discrete graphics chips (Nvidia/AMD), the attach rate on Intel integrated graphics has to be pretty high (>75%?). As some of this is replaced by integrated (CPU/GPU) solutions, that is likely cutting Intel's integrated chip share at a much faster rate than, say, Nvidia's/AMD's.

I wonder if this is just an artifact of CPU/GPU integration - eventually the separate integrated graphics chips will tend toward zero and the graphics market share will be more or less just the discrete market share.

The research said this could point to a slowdown in PC shipments in Q1 (as these chips tend to be a leading indicator and bellwether). While Q1 generally slows down, I wonder if the gradual decline is more a function of an increasing share of CPU/GPU integrated chips. I think the research group needs to use southbridge chipsets as more of a leading indicator.

HardOCP demo'ed an almost-ready-for-prime-time Radeon 6990 and were impressed. It's the successor to the 5970 (dual GPU, I am assuming it uses 2GB memory) and apparently will have the same impact as the 5970 in terms of performance improvement over the previous card (the 6970, in this case).

No sooner said than done. The ATI moniker has been dumped in favor of the new AMD branding scheme. Well, almost. They wisely kept the Radeon trademark with this new, extraordinarily powerful, AMD Radeon 6990.

What a way to start, and what a monster. Kudos, nice job. Be advised, all 5xxx-and-up card owners: the latest drivers are mandatory for trouble-free operation. Early drivers were pretty bad, especially if you are mad enough, like yours truly, to run two in Crossfire. Instantaneous reboots were not uncommon in the middle of play.

That said, a remarkable run for what was ATI Radeon 5xxx and 6xxx series products.

As most of you know I’ve always had a difficult time conceptualizing the difference between memory process and Logic process.

For the most part, companies brag about xx-nm memory then subsequently fall on their asses when that process is applied to logic.

I called it the ‘memory thing’.

The best explanation (in terms I could understand) was that of a Jet engine and a rocket engine. They produce thrust but they do it in entirely different ways. Thank you, Ortho.

However, it seems my beloved Intel produces both rocket AND jet engines these days with the introduction of new 25nm SSDs. 25nm, not too shabby, and they’re cheaper, faster, bigger, and for sale. No bragging paper-launch demonstrators here.

There’s a nice review at Anandtech for your pleasure. The 300G drive for 530 bucks is definitely in a sweet spot.

Anybody notice that in AMD's conference call, Siefert admitted that AMD's server marketshare was just 6.6%? Seems peculiar that Magny Cours offering 12 cores at a price cheaper than 6-core Xeons couldn't get them more mindshare and marketshare after a whole year..

It looks like Intel will be making an announcement of some sort this Wednesday. Intel says it is “its most significant technology announcement of the year.” Based on that, I'm going with the rumor that they will be unveiling their 22nm process, though it is still months away from hitting the market.

I'm also going to go on record as saying that Charlie at Semi-accurate is all wet in his announcement that Intel is using trigate (finfet) transistors in the cache only. If Intel is going trigate, it will be all or nothing. Intel is smart enough not to try and complicate the process by trying to combine planar and 3-D transistors on the same die.

I saw that too. I did some poking around and it seems his report on Intel’s announcement is fairly accurate. However being a noob I didn’t think, nor did he explain (as you did), there were technical caveats to the planar/finfet process/architecture.

In any event, the good news is INTC is at 22nm while the rest of the industry can’t find their collective way to 32nm logic.

(Don’t look now, but Charlie will be demonstrating his technical prowess on Group III-V materials!)

Looks like a lot of companies have left the BAPCO group, including NVIDIA and VIA. It will be interesting to see if it has any impact on the industry. I have wondered in the past about how much influence such broad benchmark applications have. How reliable is a benchmark suite run by a consortium that invites the very companies it is testing to help build it?

“we have to either start adding more interconnect layers to meet the needs of the high-performance processors”

Ortho was hammering this home two years ago. Or was it three?

“we have to start using more double-patterning layers to get down to the tight pitches and tight design rules.”

Hell, Guru was chanting this mantra with religious fervor for nearly four.

“We’re still very cognizant and sensitive to keeping wafer costs low, but our processors—which range from high-performance to low-power Atom SoCs”

ITK, you win this one. I believe this was your favorite analytical pique.

Finally, this one goes to all, especially G:

"We have had our hopes on EUV for some time, but luckily we’ve also pursued immersion and double patterning in parallel. That has come up better than most of us dreamed possible. It’s not that EUV isn’t there yet, but double patterning with immersion has really delivered."

SemiMD: So you can get by with double patterning and immersion for the foreseeable future?

Bohr: Yes.

Why do I get the feeling “luckily” had nothing to do with it?

Man, talk about being clairvoyant in the long term. I get the feeling I established a casual repartee with some very special VIP’s (to say the least) over the years, and definitely those In The Know.

As an ancillary comment to my previous above, I thought I might mention the other most predicted dynamics by the learned members of this site, my own pet peeve subject, graphics.

Sure, I have two very powerful, if not too powerful, cards in my machine. As was so politely pointed out to me by the very forgiving members here, Intel’s “good enough” solution has been far more effective than Larrabee could ever hope to be.

As NVDA and AMD beat each other to death with expensive high end solutions, INTC quietly gained nearly 20 percent in graphics market share over the two.

I may have to agree with Fuddie that standalone graphics card companies may go the way of the Dodo bird. Factor in gaming on Wiis and PlayStations, and who would be crazy enough to spend $1000 on dual-card PC solutions?


Bulldozer reviews are out. I admit that I haven't been following the hobby that closely for a while, and so I wasn't aware that there would be reviews up today.

It seems pretty disappointing, in that it seems that AMD remains stuck in a rut. The newest CPU is a good deal for the price, but you wonder if AMD is tired of using "best bang for the buck" as a way of admitting that the other guy has a faster product.

My impression is that BD won't force Intel to cut prices or alter its roadmap. I'm sure the hard-core fans will find a benchmark or two with the right tweaks that will give them some hope (or conversely, they'll comfort themselves by insisting that the posted benchmarks are purposely biased). But the new processors don't excite me at all.

Whoops, I take that back. Doing more reading and it seems as if it would be better to get a prior-gen hex-core if you want to get an AMD CPU. This looks like a PR disaster for AMD.

I'm seeing attempted justification along the lines of "this CPU will do much better on future software" or arguments to that effect. But that doesn't really help AMD. In effect, they are saying "you'll really love this CPU... IN 2017!!!"

Yeah, even the AMDZone fanbois are pretty disappointed in BD - 5 years in the making, no faster than a Thuban, a huge die with 2BN transistors (wonder what the extra ones do, seeing that Thuban has 6 full cores and only 1.1BN). It also sucks down power like the original Fermi :P.

Baron Matrix (aka Bulldozer Baron aka Baghdad Bob reincarnated) over on Tom's Hardware (which for some peculiar reason saw fit to give him limited moderator ability to delete and even ban posters) is off galloping around the web looking for excuses as to why BD is so underwhelming. Well, good luck with that..

“My impression is that BD won't force Intel to cut prices or alter its roadmap.”

It’s been a long time since I considered INTC’s roadmap seriously affected by anything Advanced Micro Devices is doing. In fact, do they factor in at all?

I’m willing to wager INTC is keeping a slot open in the market for them to remain in DOC’s so called “vegetative state”.

Actually, AMD has found/been given its traditional niche in low end laptops and low end corporate bread boxes. Perhaps this is a good thing since they don’t need to layout billions for FABs requiring cutting edge tech/production.

Hey, they can just bite the bullet and have INTC build chips for them!

It’s been a long time since I considered INTC’s roadmap seriously affected by anything Advanced Micro Devices is doing. In fact, do they factor in at all?

I don't think that they have for some time now, which is what makes this release so baffling to me. Unless they've got an ace up their sleeves, this looks like a pretty big embarrassment. I just don't see a market segment where BD will have any impact on Intel at all.

Could it be that AMD felt that they had to push this thing out of the door and hope that the next time is the charm? Because based on the benchmarks I saw, BD is competing with the Intel 2500 and AMD's own previous-gen hex core, and occasionally competing with the i7 2600. I guess if you spend all day compressing files, you'll be in seventh heaven...

It’s fairly obvious that INTC’s monumental lead has taken its toll on the number of posts on this site. After all, what more is there to say?

However, since AMD’s latest release (mentioned above) is out, I thought a preview of INTC’s latest and greatest i7 3960X might be in order. Also, as a bonus, a new socket and a new chipset review are thrown in for good measure.

Interestingly, they’re going for quad channel memory to help with bandwidth, impressive.

Further, if anyone had any ideas that INTC is resting on their laurels, think again. The new Sandy Bridge chip is making the i7 990X look almost anemic, no joke!

The boys at INTC are pumping gold, not silicon. A 26% increase in most areas, higher in some! While AMD founders, INTC only gets smaller and faster, physics be damned.

Well personally I'm gonna wait for Ivy Bridge, which I hope will be available soon after CES in January. Intel says they are producing in volume in December. But I'll wait and get a 7-series 1155 mobo instead of a 6-series.

Lots of rumors going around as to why AMD bombed with Bulldozer, including one from a supposed disgruntled former engineer who said since the buyout of ATI, AMD's design engineers use cookie-cutter SoC automated design instead of hand-tuning at least the critical speed paths. Dunno if true or not. Also L2 cache issues - way too high a latency. Not sure what can be done to fix this in the next stepping.

No worries Nonny, just remember that no matter how slow everything else might be, it all evens out when you compress a few files!

I see that Intel quietly released the Core i7 2700 and 2700K at a very slightly increased price point (like $15 more than the 2600). It appears to be just a 100MHz speed increase over the 2600. It's kind of odd, it's almost as if they wanted to rub AMD's nose in it. "What will we do to counter AMD's new chip? How about a token release of a slightly faster CPU? LOLOL!!!"

Ah well. My hex-core Phenom is still chugging along, though the Radeon 5870 has been replaced by a 3GB GTX 580 card. Nice card, good performance.

It's not my main PC anymore, though, as I have finally gone mobile. I've had laptops before (my Sony VAIO is still a solid performer) but this is the first one that is a legit desktop replacement (at least in my world!). It's this one from MSI. Works like a charm so far.

Mobile, for me, typically means that I can take it from my desk in one room and onto a desk or table on another (or a small table on the balcony, when the weather is good).

But yeah, I'm hauling this to work everyday just because it's new and I am still putting it through its paces. Since I'm on an exercise kick the last several months, this is just another form of cardio!

“Reducing our cost structure and focusing our global workforce on key growth opportunities will strengthen AMD’s competitiveness and allow us to aggressively pursue a balanced set of strategic activities designed to accelerate future growth.”

LEX, you just may be right.

However, for everyone else, a more in depth analysis is at the link below. It ain’t pretty.

Hmm, in addition to the layoffs, Rory Read is also supposed to announce a 'new direction' for AMD tomorrow. Wonder if that means abandoning or delaying desktop? Obviously those disappointed AMD fans who waited and waited on Bulldozer, and who are now waiting on Piledriver, might get disappointed once again if AMD pulls the rug out from under them yet one more time.

According to the latest MaximumPC magazine, the next two iterations will be code-named Steamroller and Excavator. While it's easy to recognize the pattern and the first three names are pretty good (IMO), the last one makes me cringe. It's only an internal development name, though. Based on what has happened the last few years, the official name could be "Two-Little-Too-Late-Athon."

MaximumPC benchmarked the Bulldozer 8150, and if AMD fans were unhappy with reviews on the web, they'll be apoplectic over this one, as it finished behind Intel's best in every test. The Intel CPUs are beyond the price point of the 8150, but they don't call it MAXIMUM PC for nothing. They want power, and lots of it, and the 8150 is a striking disappointment to them.

Yes, Ars Technica has compiled a bunch of Bulldozer benchmarks on the server end. It ain’t pretty.

The key word here is “catastrophe”, theirs, not mine. Intel set out to recapture the server side of things a few years back with Nehalem, and according to all reports, they have.

Interestingly, Fudzilla (no less) wrote, “Although Opterons did manage to offer superior performance against comparable Xeons in a TPC-C scenario, they end up costing about 50 percent more, yet deliver an 18 percent improvement in performance.” With that I would advise particular emphasis on the word “comparable”.

One can only imagine what position AMD will be in when Ivy Bridge and Tri gate architecture is released. Should I guess 2Q ’12?

They bought ATI and assumed a pile of debt when instead they could or should have invested in CPU design and process technology and grown market share. Everybody wanted to buy AMD when they were the little guy... but no, Hector "ruinAMD" had other ideas, and where are he and AMD these days? Finished.

What is even more interesting is that GF is in big trouble too. Where is the GF CEO? Gone. Who there really knows how to develop and manage technology? A bunch of oil men swimming in black gold. You think they have any clue how to select and manage the people who manage technology?

This is good read: http://www.tomshardware.com/news/amd-globalfoundries-28nm-apu-tsmc,14073.html

I wonder who thinks that TSMC, with such a diverse set of customers, can compete and develop bleeding-edge 3D finFETs without a huge-volume revenue runner to recoup the investment?

Tick tock tick tock the clock is now ticking on ARM and the host of foundries.

Lex found a few interesting reads. I'll be sharing with the Doctor too, ROFL.

"McGregor said that should the 28-nm Globalfoundries-made APUs really be scrapped, it would cause a “world of hurt” for AMD, which would be left a generation behind its competition for most of 2012, making AMD even less significant in the market."

IBM is great at putting together a pilot line, but scaling has never been a strength. They probably had pretty much one tool at most of the critical process steps dialed in (as that is often enough to handle pilot line/development capacity) and had no idea how much of a razor's edge they were running that process on.

Transfer that to your partners who now have to do this on multiple tools and on varying products and that razor thin margin means you have a real hard time scaling it up.

You really can't decouple design and manufacturing at the leading edge anymore (unless you are prepared to sacrifice some performance to give yourself more margin) - this I always thought was the biggest hidden risk in AMD spinning off the fabs, and it looks to be the issue now.

Going forward, I would not be stunned to see AMD operating more and more at the n-1 node at the foundries (a half node or full node behind the foundries' leading edge) to avoid these issues.

HardOCP reviewed the AMD Radeon 7970 and it performed very well. It's far ahead of the previous generation 6970 and showed a 10-20% improvement against an OC'ed GTX580.

The reviewer seemed concerned that it may not have been impressive enough to handle NVIDIA's next generation card, but that for now it's the fastest card on the block. I think that a 10-20% improvement against an overclocked 580 is pretty damned good! Especially when you consider that it's a much quieter and more power efficient card.

This also puts pressure on NVIDIA to make sure that its next card is a beast of an upgrade. I don't expect to upgrade my 3GB 580 anytime soon, but it's nice to know that when I finally do, whatever I get will be a pretty big upgrade.

PS- Sparks, speaking of upgrades, I dismantled my 1090T system and replaced the motherboard, CPU, and RAM. I was looking at the new i7 processors, but the 2600K is such a good deal right now...

I got a Corsair 800D tower, which is enormous. It's housing a 950W Corsair PS powering an Asus z68 board and a 2600K with 32GB RAM (cackle). It's running the stock HSF so I've kept the speeds at normal for now. Perhaps later next year I'll get a decent air cooler and see how hard I can push it. Might even try to get it to 4GHz with the stock cooler for kicks, though it is not lacking for performance right now!

Mainly, I'm wondering just how big of a photoshop file I can create before it begins to slow down (maniacal laughter).

Intel still doesn't offer an "acceptable" graphics experience on their processors while AMD has the advantage in this area (like it or not). It's just a matter of time for AMD to beat Intel once again in the CPU realm (performance wise). maybe they will or maybe not, only God knows...

Intel does indeed offer an acceptable graphics solution with their processors. It would not surprise me if Intel sells as many (or even more) graphics chips than either AMD or NVIDIA. Built-in graphics still make up the majority of the video options for computers these days, and unless a significant percentage of those buyers are making video upgrades (my guess? They're not) then the Intel onboard graphics are enough for them.

And I don't think it's a simple matter of time for AMD. They've had ample time to come up with a viable solution ever since Intel pulled ahead with Core2, and so far they have been long on promises and short on delivery. Their best hope is that they can find another NexGen or Alpha that has a superior design that they can graft onto their next CPU.

Barring that, they need money and the right management for their development teams. Of late, they seem to be sorely lacking in both.

Intel still doesn't offer an "acceptable" graphics experience on their processors while AMD has the advantage in this area (like it or not). It's just a matter of time for AMD to beat Intel once again in the CPU realm (performance wise).

IIRC Rory Read is supposed to announce AMD's 'new direction' soon, and de-emphasize competing with Intel at least on the desktop. In which case, yes the war is over as AMD has given up, decamped & gone home to Momma.

As for graphics, it's only a matter of time before Intel catches up and surpasses both AMD and NV, at least for integrated GPUs. Rumor has it that Haswell will use silicon interposer technology to stack low-power DDR3 memory directly on top of the GPU, meaning a manyfold increase in bandwidth and reduction in latency. Some Intel slides from a year ago showed a 7X increase in GPU performance over Sandy Bridge's (whether HD2K or 3K, I dunno). So that could mean Intel's IGP could surpass that of Trinity..

IIRC Rory Read is supposed to announce AMD's 'new direction' soon, and de-emphasize competing with Intel at least on the desktop. In which case, yes the war is over as AMD has given up, decamped & gone home to Momma.

According to AMD's Analyst Day, Piledriver, Steamroller and Excavator are clear signs that AMD won't be exiting the performance desktop market without putting up a good fight.

Intel might have the process advantage, but it takes more than that to make a really good processor, and AMD has proven that to them several times (maybe not with these last iterations of processors).

“Intel still doesn't offer an "acceptable" graphics experience on their processors while AMD has the advantage in this area (like it or not). It's just a matter of time for AMD to beat Intel once again in the CPU realm (performance wise). maybe they will or maybe not, only God knows...”

This has got to be flame bait. Either that, or you’re simply delusional. I’m not God, but I can read. Intel has surpassed the entire industry in process and architecture by nearly TWO, 2, II, generations, a deadly combination.

This has got to be flame bait. Either that, or you’re simply delusional. I’m not God, but I can read. Intel has surpassed the entire industry in process and architecture by nearly TWO, 2, II, generations, a deadly combination.

Take it however you want, but once again, let me repeat myself: process advantage alone is not guarantee for a good processor. In the end, micro-architecture rules it all

Intel today spelled out in more exacting detail just what the Ivy Bridge chip delay means in the wake of comments published Sunday from an Intel executive.

"Reports of an eight-week delay to the Ivy Bridge launch are inaccurate and our schedule has only been impacted by a few weeks," spokesman Jon Carvill told CNET today.

So, for instance, if a desktop Ivy Bridge product was slated for an April launch, that would be pushed to May. And a mobile product scheduled for May, would launch in June. Intel always staggers production schedules. For example, Intel's most power-efficient ULV (ultra-low-voltage) parts typically ship later than other (e.g., desktop quad-core) parts.

On Sunday, Sean Maloney, executive vice-president of Intel and chairman of Intel China, told the Financial Times that Ivy Bridge chips would be delayed until June.

Carvill added that once the Ivy Bridge chips launch, Intel will bring up production faster than the current Sandy Bridge chip.

"We expect to ship over 50 percent more volume of Ivy Bridge units to the market in the first two quarters of production in 2012 as compared to Sandy Bridge [in the same time frame last year]," he said.

The chips most affected by the delay are the ULV products, according to people at Intel familiar with the delay. ULV chips are in high demand for ultrabooks.

"There was real high demand for ULV and we weren't going to have the volume we needed for that with respect to the original launch timeline," one Intel person familiar with the circumstances surrounding the delay said.

Sean Maloney speaks? Wasn't he the heir apparent until he had a stroke and couldn't speak, then got sent to China? Wonder why. Before the stroke I'd listen; after, who knows what the poor guy is thinking or saying.

If Intel is slipping Ivy, who cares? The more interesting thing right now is that they have tri-gate up and running, and with the next spin of the Atom low-power design on this superior transistor they will move even with, if not ahead of, ARM on power. Every manufacturer of phones has got to be thinking: what is my competitive angle when everyone else is buying the same chip from the same manufacturer and using the same frigging OS? Even the one phone maker who designs his own chip but fabs it in someone else's fab has got to be wondering: is this PowerPC, SPARC, PA-RISC versus x86 all over again?

"Take it however you want, but once again, let me repeat myself: process advantage alone is not guarantee for a good processor. In the end, micro-architecture rules it all"

Heh, now that AMD appears to have dropped hopelessly behind on process, it no longer matters because micro-architecture is everything. Well then, perhaps the AMD fan can tell us just how far ahead of everyone else AMD is on micro-architecture?

Now the only question is: will Intel's lead in high-k/metal gate and tri-gate give them enough time to spin Atom two more times? If history is any lesson, the answer is yes. Anyone wonder why Buffett is loading up on INTC?

Changing architecture isn't too hard; a couple of years after the decision you just execute the design. Trying to get high-k metal gate working when you choose badly, like gate first, is terrible.

Do we need to go into the mistakes by the other side? SiLK, biaxial strain, SOI, gate first... it goes on and on. They can't get anything right!

"Heh, now that AMD appears to have dropped hopelessly behind on process, it no longer matters because micro-architecture is everything. Well then, perhaps the AMD fan can tell us just how far ahead of everyone else AMD is on micro-architecture?"

LOL. The pot calling the kettle black. ;) Once again, uArch rules it all. It doesn't matter if AMD is behind as of now; it's the future offerings that count. That's all I'm saying. ;)

It looks like AMD is starting the rumored acquisition spree with the impending purchase of SeaMicro. According to the Wall Street Journal (no link because it is paywalled and inaccessible, apologies), and confirmed by SemiAccurate sources, the server chip maker is acquiring the server maker.

As you might recall, SeaMicro makes the most innovative, dense, and power-efficient servers out there for certain markets. Until very recently these boxes used Intel Atom chips, but Xeons were recently added to the mix. Either way, SeaMicro packs a ton of cores into a small space with unmatched efficiency. If the problem is suited to a shared-nothing cluster, it is hard to do better.

What was until last week a showcase for Intel technology and server direction is now an AMD property. It signals AMD’s direction on future server technology, and plays perfectly in the spaces where AMD was the strongest, at least for the very near future.

In any case, this is a win/win for AMD and SeaMicro. AMD gets the technology and a foot in the door to the fastest growing server market out there, and SeaMicro investors get a lot of cash. Unfortunately, SeaMicro workers may have to drive another exit up the 101 to get to work, but that is far from the end of the world. Intel however just got their baby stolen from the crib, and that has got to hurt. S|A

The short story is that it completes the AMD clean sweep of all the next gen consoles.

Yes, you heard that right, multiple sources have been telling SemiAccurate for some time that AMD won not just the GPU as many are suggesting, but the CPU as well. Sony will almost assuredly use an x86 CPU for the PS4, and after Cell in the PS3, can you really blame them? While this may point to a very Fusion/Llano-like architecture we hear that is only the beginning...

I guess the AMD-haters will skip the PS4 after all. Am I right? Ohh wait, you can always go with Wii-U or Xbox Next... Ohh Shh**t, those use AMD GPUs as well.

Anon: In any case, this is a win/win for AMD and SeaMicro. AMD gets the technology and a foot in the door to the fastest growing server market out there, and SeaMicro investors get a lot of cash.

Considering how soundly the SB-EP Xeons trounce Interlagos in the various articles around the web today (Anandtech, Tom's, etc), I wouldn't be surprised to see AMD continuing to use Xeons in their high end servers and maybe continue with Atoms in the low end, esp. with the 22nm node.

Looks like NVIDIA is out for blood. HardOCP tested the GTX 680 and in general it is faster than the Radeon 7970, though either card seems like overkill for single monitor setups. I was impressed that the 680 was as good as the 7970 on triple monitor setups, though. And with an MSRP that is $50 lower, they're looking to make up for lost time.

The real battle looks as though it will be in the second or third tier of cards, since the top level does not seem practical for anyone running 2560 x 1600 or lower resolutions.

I find it kind of odd that every time a flamebaiter weasels his or her way onto this site and ejaculates some ridiculous comment about the death of INTC, be it graphics, process, or architecture, something pops up to turn the minions in the other direction.

Well, it’s not them exactly. It’s Intel performing yet once again.

Of course, I must give ITK credit here when Larrabee was put on the back burner and he assured me it was far from being dead. It seems INTC has been burning the midnight oil with graphics.

“In the end, the massive bandwidth, coupled with the 5x increase in shader performance, will mean Haswell is a real graphics monster.”

I believe the handwriting is on the wall when it comes to mainstream stand-alone graphics cards.

Hmm, I've heard from a pretty good authority (i.e., Intel employee) that Intel is selling as many IVBs as they can crank out. The repositioning of the HD4K GPU into a larger number of the lower parts as requested by Apple is what caused the big delay in mobile (now June IIRC) whereas DT is only delayed 3 weeks.

However AMD is supposed to release Trinity on May 15, so it seems they finally caught on to the fact that you don't let your competitor run loose for the better part of a year with no comparable product. Will be interesting to see the benchies comparing Trinity to IVB..

Moose, I read the same thing (nearly) at Charlie D.’s site. However, I trust your source more.

Apple, obviously, carries big clout with INTC, and I say why not, as long as they stay exclusive, of course. They get the newest and fastest parts before anyone else.

Have you been near an Apple store at your local Mall lately? It’s like they’re giving away the trendy little gadgets, as if cost is not a factor, while they clamor over each other to get a peek at the newest sensation.

One little caveat, though: the malware and viruses we PC users have learned to live with for the past 20 years are starting to hit home for Apple users. I guess the Apple OS has finally achieved ‘big target’ status.

Because they're not much of a problem. Ivy Bridge doesn't overclock as well as Sandy Bridge, at least at the present time.

At worst, a few hobbyists will defer purchasing a new CPU until the 'problem' is resolved. More likely, they'll buy a Sandy Bridge chip instead. Which means that the Ivy Bridge "heat problem" will cause Intel to lose a sale to Intel. I'll bet that's killing them.

You know things are going bad for AMD when a fanboy thinks that this is what passes for a problem.

I'm baaack. Been swamped with work and school. Not a great combo for quality of life.

Anyway, regarding this

"Changing architecture isn't too hard; a couple of years after the decision you just execute the design. Trying to get high-k metal gate working when you choose badly, like gate-first, is terrible."

You completely miss the boat if you are referring to Intel. Because they are an IDM, you don't just spin the architecture, you link it to the process. That is becoming Intel's biggest advantage. Their process yields well because design and manufacturing are coupled.

Working on some thoughts regarding Intel's process nodes vis-a-vis the Atom vs Arm competition. Hope to have something up in the next week or two. While I have a bit of time now, it still takes a while to dig up relevant references and cross check for accuracy.

Have you looked at the performance vs. power curves for 22nm on Intel's website (see slides 10 and 11 of Krzanich's PowerPoint from Intel's 2012 investor day)? I doubt it. If you did, and understood what you were looking at, you wouldn't be surprised.

The slides show gate delay vs. voltage; speed vs. power, if you prefer. Note that switching speed does not improve as quickly with rising operating voltage for the 22nm process as it does for the 32nm process. I believe it is safe to assume the gap continues to close until the 32nm process actually shows better power/performance values than 22nm (assuming you stay below a critical voltage and don't fry your transistor).

Bottom line, Intel's 22nm process was designed for maximum power efficiency at low voltages. So higher temps on an overclocked part shouldn't come as a real surprise to anyone who takes a moment to think about what they are looking at. Energy efficient overclocking (an oxymoron anyway) was not the target for Intel's 22nm process. Improved performance at low voltage was the target and they hit that very well, thank you.

What a deal. Make a snarky comment, get an education. Come back again soon, won't you?
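The shape of those curves can be mimicked with the textbook alpha-power-law delay model. To be clear, this is purely a toy sketch: the model is standard, but every number below (threshold voltages, alpha exponents, scaling constants) is invented for illustration and is not taken from Krzanich's slides.

```python
# Toy alpha-power-law model: delay ~ k * V / (V - Vt)^alpha.
# All parameters below are made-up assumptions, not Intel data.

def gate_delay(v, vt, alpha, k):
    """Gate delay (arbitrary units) at supply voltage v."""
    return k * v / (v - vt) ** alpha

# Hypothetical "32nm-like" and "22nm-like" devices; the 22nm-like one is
# tuned (lower effective Vt) to be fastest at low voltage.
for v in [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]:
    d32 = gate_delay(v, vt=0.35, alpha=1.3, k=1.0)
    d22 = gate_delay(v, vt=0.25, alpha=1.1, k=0.9)
    print(f"V={v:.1f}V  delay32={d32:.3f}  delay22={d22:.3f}  "
          f"32nm/22nm ratio={d32 / d22:.2f}")
```

With these assumed parameters the delay ratio shrinks as voltage climbs: the 22nm-like device is far ahead at 0.7V and only modestly ahead at 1.2V, which is the "gap closes at overclocking voltages" behavior described above.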

“Energy efficient overclocking (an oxymoron anyway) was not the target for Intel's 22nm process. Improved performance at low voltage was the target and they hit that very well, thank you.”

Oh how true.

With the power, speed, and efficiency of today’s processors why bother? Hell, the damned things are throttling themselves right out of the factory on air with little to no user intervention! Hello?! Further, the software folks are really taking advantage of the additional cores we’ve been blessed with.

This old time overclocker, for one, has put away the pumps, the radiators, and thrown out the algae preventer.

And basically, those days are over, unless of course you’re trying to get an AMD chip to perform nearly as well as an equivalent INTC chip. Then by all means buy plenty of liquid nitrogen; you’re going to need it.

I was an avid overclocker back in the pre-Pentium 4 days. Back then, a seemingly small boost in clock speed could deliver significant performance improvement at a considerable cost savings. I can remember OC'ing an AMD 486 chip from 120MHz to 160MHz. It was a pain in the ass having to configure dip switches on a motherboard, but the payoff was worth it.

And boosting a Pentium chip from 133 to 166MHz or 166 to 200MHz might not seem like much, but the price differences could be pretty big at the time. You really were getting extra performance and saving some money. Not long after that were the glory days of the Pentium II and III and Celeron chips that would OC from 300 to 450 (or even 500+)MHz. You were still getting a good boost, but RAM and hard drives and video cards were having a greater impact on performance at a lower price point.

These days I don't OC very often. Usually just to see the fancy numbers on the screen. But performance... eh. It's nice to know that Photoshop filters are being applied in 3.2 seconds instead of 3.22 seconds, but it's no longer such a big difference. If I was still dabbling in 3D I guess I'd get more out of overclocking. And I can run my video games at 1920x1200 with every last option cranked up to 11 without any problem.

It's kind of sad in that I don't get that thrill from overclocking anymore. But at the same time, it's nice that I don't have to. I am paying less for hardware that does more than I could even dream of back in those days.

Regarding this IVB “overheating problem”, I found this article, which explains in detail what’s going on and how far you need to go to get there. Be advised, they’re talking in the 40 to 50% overclock range. (Hell, as a retired overclocker, getting 20% stable, 24/7, was Valhalla!) More importantly, transistor density does play a large part, and what densities they are!

“Focusing on wattage, rather than temperature, paints a clearer picture of how Ivy Bridge’s increased thermal density plays out in real life. Focusing on the chip’s thermal paste obscures the larger trends. With bus-based overclocking having largely gone the way of the dodo and AMD unable to offer an enthusiast challenge to Intel, the days of buying a low-end chip and ramping the clock 30-50% to compensate are well and truly gone. Intel’s desktop products are now largely differentiated by core count, Hyper-Threading, and cache sizes rather than clock speed.”

The article reaches the same conclusion ITK did from the technical side, and my conclusion from the less technical side (LN2). In any event, it’s what I’ve been saying for years: just buy the best and fastest Intel has to offer, live in bliss, and be done with it. After all, what’s a couple of hundred bucks spread out over the course of two or three years?
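The article's "focus on wattage" point falls out of the first-order dynamic-power relation P ≈ C·V²·f. A minimal back-of-the-envelope sketch, where every number (effective capacitance, voltages, frequencies, die area) is invented for illustration rather than measured from any real Ivy Bridge part:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# All figures below are hypothetical, chosen only to show the scaling.

def dynamic_power(c_eff_nf, volts, freq_ghz):
    """Switching power in watts for effective capacitance c_eff_nf (nF)."""
    return c_eff_nf * 1e-9 * volts ** 2 * freq_ghz * 1e9

stock = dynamic_power(c_eff_nf=20.0, volts=1.05, freq_ghz=3.5)
oced = dynamic_power(c_eff_nf=20.0, volts=1.30, freq_ghz=4.8)

die_mm2 = 160.0  # hypothetical die area
print(f"stock: {stock:.0f} W  ({stock / die_mm2:.2f} W/mm^2)")
print(f"OC'd:  {oced:.0f} W  ({oced / die_mm2:.2f} W/mm^2)")
print(f"power up {oced / stock:.2f}x for a {4.8 / 3.5:.2f}x clock")
```

The takeaway matches the comment above: a roughly 37% clock bump taken with the usual voltage bump more than doubles the heat that has to leave the same die area, which is why the wattage, not the thermal paste, is the story.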

Sparks said "just buy the best and fastest Intel has to offer, live in bliss, and be done with it."

Heh, did exactly that: bought an i7-3770K, an Asus P8Z77-V Deluxe mobo, 16 gigs of Corsair Vengeance 1866 CL9 memory, an SSD, a 600GB Raptor HD, etc., and put it together last week. The build went pretty fast; I had more trouble with the silly Win7 Pro "upgrade" insisting that I install a prior OS such as Vista before it would proceed and overwrite it. (I did not want to do that on my pristine SSD, but I finally gave up and did it the M$ way. It used to be that all you had to do was stick the prior OS install disc in the optical drive, but not now.) Still setting it up, so no time to bench, but it seems much faster than the 5-yr-old C2Q system it replaces.

“The problem for ARM vendors such as Nvidia, Qualcomm and Texas Instruments is that while they are fighting among themselves, Intel's considerable clout with device makers coupled to an Android operating system that it will have an almost unilateral say in optimising, could leave them with a stunted brand of Android, with no hope of getting an x86 license from Intel.” (Predicted by LEX)

I came across this juicy item at the INQ. It seems INTC has developed its own x86 OS and is positioning itself to be a market leader in smart phones. (Predicted by ITK.) To quote: “This is how Intel intends to differentiate future Atom products from competing ARM products.”

Naturally, INTC WILL NOT license the Atom architecture to anyone, a sound decision from the board in this shareholder’s eyes. I’m guessing INTC learned something after AMD’s (The Imitator) lawsuit. As expected (from this website anyway), AMD will not (can’t?) get into the smart phone fray, choosing something a bit larger, like tablets.

I’m also guessing that process and architecture are so closely bound it would be too much of a giveaway for INTC. If I’m wrong, I can live with it; give them NOTHING. (Predicted by me.)

Does anyone remember the time when overclockers were the lunatic fringe of the enthusiast set? Can you recall the time when the suits at INTC made the word “overclocking” verboten in the hallowed halls of Santa Clara?

My, have times changed. The suits are in jeans and are giving out $35 “Performance Tuning Protection”.

In other words, overclocking insurance. Ain’t that a kick in the ass! No more sleepless nights about ‘GURU’s’ horror stories about electron tunneling through substrates and insulator layers. No more sweats about ‘Orthos’ “breakdown of copper interconnects in the “backend”.

Just pay 35 bucks, clock away, and get a new chip straight out of the oven!

It just goes to show you how far we’ve come and how confident they are about their products.

Moose, pay the 25 bucks (for a 3770K) and have a blast, buddy! Your chip qualifies!
