
MrSeb writes "When Intel goes looking for new chip manufacturing technology to invest in, the company doesn't play for pennies. Chipzilla has announced a major investment and partial purchase of lithography equipment developer ASML. Intel has agreed to invest €829 million (~$1B USD) in ASML's R&D programs for EUV and 450mm wafer deployment, to purchase €1.7B worth of ASML shares ($2.1B USD, or roughly 10% of the total shares available) and to invest general R&D funds totaling €3.3B (~$4.1B USD). The goal is to bring 450mm wafer technology and extreme ultraviolet lithography (EUVL) within reach despite the challenges facing both deployments. Moving to 450mm wafers is a transition Intel and TSMC have backed for years, while smaller foundries (including GlobalFoundries, UMC, and Chartered, when it existed as a separate entity) have dug in their heels against the shift — mostly because the shift costs an insane amount of money. It's effectively impossible to retrofit 300mm equipment for 450mm wafers, which makes shifting from one to the other extremely expensive. EUVL is a technology that's been percolating in the background for years, but the deployment time frame has slipped steadily outwards as problems stubbornly refused to roll over and solve themselves. Basically, this investment is a signal from Intel that it intends to push its technological advantage over TSMC, GloFo, UMC, and Samsung, even further."
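The figures quoted in the summary can be tallied up as a sanity check. A small sketch; the EUR-to-USD rate below is just the one implied by the summary's own conversions (roughly 1.24), not an official exchange rate:

```python
# Rough tally of the three commitments quoted in the summary above.
# The EUR->USD rate is inferred from the summary's own conversions (~1.24),
# not an official exchange rate.
eur_to_usd = 1.24
commitments_eur_bn = {
    "EUV / 450mm R&D program": 0.829,
    "ASML share purchase":     1.7,
    "General R&D funding":     3.3,
}
total_eur = sum(commitments_eur_bn.values())
print(f"Total: EUR {total_eur:.3f}B (~${total_eur * eur_to_usd:.1f}B USD)")
```

That puts Intel's total commitment to ASML at roughly EUR 5.8B, or a bit over $7B at the implied rate.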

Yes, actual relevant news about improvements in technology. And it's seeing far fewer comments (and less traffic, one assumes) than the more recent article about San Francisco not buying Apple products. It's a sad state of affairs.

In your mind, that is. Today more people recognize that sustainability has to be part and parcel of high tech, or else it is merely "so-called high-tech".

People like to complain and argue. What's to complain or argue about here? The only negative I could think of is the potential money loss an investor would have if they had money invested in one of the competing fabs.

This is an ominous sign of things to come. Intel already has significant advantages [ieee.org] in the foundry business. These could be leveraged further to give its x86 chips a boost vis-a-vis ARM. The other players need to pull their act together & pool resources to counter this. If there is no level-playing field because the foundries can't keep up we could well be facing an x86 monopoly in the low-power chip market too.

I'm a HUGE Intel fan. Really, I am. But they have a rather serious Windows dependency they need to be quit of before they're ready to take on ARM, no matter what their process technology is, nor how fabulous their fabs. They are in serious danger of losing the plot.

Some eight years ago a laptop and desktop came to have the capabilities almost anybody needs. The innovation should have turned on that day to making the thing thinner, lighter and smaller; to making it run all day - but it didn't. Instead Windows became more bloated (as it always has) to drive new product sales for Intel and GPU vendors to make ever more powerful systems to give us more beautiful chrome. That worked for a while. It was great for sales and margins back in the day.

And then Apple came and reminded us that the purpose of the widget and the OS is not to sell more OS and more widget. It's to serve people in ever-evolving ways. To enable and empower people to do what they want to do, and get out of the way the rest of the time. To connect us to the things and people we care about. They came out with the iPhone, and then the iPad. They gave us what we had long craved.

Right about seven years ago ARM systems became "good enough" to do this and Apple released the iPod Touch - an innovative product that struck a chord with us. In 2007 came the iPhone. In 2008 Android. Ever since 2007 Intel has fiddled while Rome burned, producing "mobile" chips that burn multiple watts.

In 2005 the talk was about "the next billion users". It was always obvious that the next billion users wouldn't have watts. Well, Apple and Google have found that next billion users even faster than predicted. They're (we're) mobile. Between Android and iOS, they've sold nearly a billion devices - by the end of this year they'll get there - and now by ignoring the needs and wants of people Intel is in for a hell of a fight.

Even now their Windows pal is abandoning them: developing a new version of Windows without the chrome that requires their power-hungry CPUs, slimming it down to the point where a 7-year-old system is more than adequate, pricing it at a spot that's going to give legs to legacy systems, and also building ARM-based systems under their own brand. That's going to kill new unit sales in every possible way for Intel. They had a good stretch where they got to milk that special relationship, but it's over now and they need to think about what to do next.

There are trust issues here that are very delicate. Buyers are not going to want to buy gear that leads back to the Bad Old Way where progress was slow.

I hope Intel figures this out. Really I do. But in the meantime I'm going to buy the kids, and Mom, Nexus 7 tabs for Christmas. My youngest son is almost old enough to teach how to build Android apps.

Intel is already out with an x86 phone that runs Android, is it not? Battery life and performance seem to be competitive with ARM offerings, and I read talk of roughly 75% compatibility with existing Market apps. The thing about being Chipzilla is that you can get blindsided by change and still come out on top, with a superior product, by throwing more money at R&D than the sum of all your competitors' assets is worth.

It is a single-core CPU with pretty poor power draw, and it's just about competitive with the old crop of ARM devices in single-threaded performance (throughput on multi-threaded work will be worse). Several vendors have Cortex A15-based devices coming out very soon, which will really make the Atom eat shit.

Its GPU performance is not good.

And it's not even ARM compatible, and is far more restrictive in terms of what a manufacturer may do with it. In order for a newcomer to make

It's "slow" only when running emulated ARM code. Slow meaning it's a little slower than the fastest ARM SoCs on the market. Look at the performance of native Intel binaries. Look at how the SunSpider JavaScript benchmark is about 10x faster than on the fastest ARM-based device.

The dirty secret is that ARM CPUs are not actually very fast. They're just the only modern CPUs that operate in the very lean, power-sipping envelope that smartphones demand. They seem remarkable, but I doubt they're actually all that

Do you like the way they abused their monopoly and forced AMD out of the market when AMD were very much superior? Or do you like the way they got off with a paltry $1.5bn fine? ($1.5bn, offset against the profits from keeping AMD from getting a better foothold and holding up their R&D, seems like a very good deal to me.)

Their Core i7 chips are excellent (fastest per-thread by a wide margin) as are their graphics. By excellent, I mean just works out of the box and very

Well, they were confronted by the FTC, and did settle w/ AMD and all their other competitors.

But this above story is a great reason to be a fan of Intel. Is it Intel's fault that AMD never figured out how to run its fabs, and went into a totally fabless model? Yeah, other chipmakers (like DEC at one time) did have their own fabs, and pretty good ones @ that, which, due to larger mismanagement @ the company, they had to get rid of. How does one explain that HP, even in the late 90s, was having its PA-RISC fabbed by Intel?

Intel, by contrast, had a fine manufacturing model from the beginning, and has built on it. Each of their fabs is an exact clone of the others, so the process variations one normally sees b/w different fabs of the same company are something one doesn't see @ Intel. It spends its money really wisely on staying @ the cutting edge of foundry technology, putting it years ahead of its rivals.

I'm not a fan of the x86 architecture, but in this area, Intel is a victim of its own past success. While RISC was definitely superior, x86, embraced as it was by Windows and its software, quickly became the standard. Intel itself had at least 3 unsuccessful home-grown attempts to replace it - the i960, the i860 and the Itanium - not to mention its StrongARM acquisition, as well as the fact that it could have embraced PA-RISC or even Alpha (after Compaq sold all the IP to them). But x86 was so well entrenched that when AMD took the somewhat counter-intuitive, yet simple, strategy of just extending x86 to 64 bits and got out way ahead of Intel, it looked like they'd eat Intel's lunch. In fact, that was what ended the Intel-AMD wars for good w/ the cross-licensing agreement, and since then, where AMD has fallen behind, that's due to a combination of their own simplistic design paradigms and operational shortcomings. Incidentally, the only thing I was disgusted by was an unproven and inadequate microprocessor like the Itanium 1 being the cause of the deaths of the PA-RISC and the Alpha - both far superior CPUs - but for that, it's HP and Compaq that are to blame, not Intel. Intel never asked either of these companies to kill their RISC platforms - in fact, it was happily fabbing at least one of them.

I would have loved for there to have been another company to be like Intel and challenge it. I would have loved to see a RISC processor (other than ARM) dislodge the x86. But I'm not going to hate Intel for the fact that neither of these happened - it's not Intel's job to fall on its face so that others can compete.

Well, they were confronted by the FTC, and did settle w/ AMD and all their other competitors.

They were. They got off with a $1.5bn settlement. I personally don't believe for a moment that the settlement came even close to covering the crime.

But this above story is a great reason to be a fan of Intel. Is it Intel's fault that AMD never figured out how to run its fabs, and went into a totally fabless model?

Quite possibly. AMD were stomping all over Intel in the P4 era. The Opteron and Athlon were vastly superior products in almost every benchmark, and cheaper to boot. However, Intel maintained 80% dominance through criminal activities.

During that time, Intel got to dump that ill-gotten profit into R&D to develop better chips and fabs, and AMD didn't. If AMD had got that enormous amount of money at the time, history would have been very different. They would almost certainly have better fabs and better chips, for a start. Perhaps not quite as good as Intel on the fab front, but much closer.

So yes, it probably is Intel's fault. That is why monopoly abuse is exceptionally damaging, and why I think it's a travesty that Intel got off scot-free. Actually more than scot-free: it was almost certainly a net benefit to Intel even with the $1.5bn settlement.

it's not Intel's job to fall on its face so that others can compete.

That's not the problem. The problem was illegal kickbacks and bribes.

Yes, Intel did bad and illegal stuff. The P4 generation was inferior due to the twin blunders of netburst and RAMBUS. However, Intel maintained its process advantage, and AMD sold all the chips it could make (so that even if Intel hadn't used illegal and immoral techniques, there wouldn't have been much of a change in the results.) When Intel reversed its architectural blunders, AMD couldn't even match Intel's architecture, let alone make up for its process technology lag.

It would seem to me that AMD's lagging Intel in architecture (once Intel reversed its policy of prioritizing the Itanium) was at least partially a result of their being a generation behind Intel in process technology. As a result, if Intel could offer 4 cores in a CPU, AMD could offer just 3, since they were a full generation behind. Generally, design engineering and process engineering have to be synergistic in order to work, and it's ideal if one is capable of covering any shortcoming

But then there are things that it enables that are just nice to have, like better graphics creation, better games, better VR applications, faster video encoding, video at higher rates, and faster high-quality image editing..

I'd hate to have the innovation driven by the need for speed disappear, because frankly you can already buy thin & light machines that do communications duties but are shit for general computing. You could always buy them, even a decade ago, ev

The next challenge is to clamp performance at current levels, and drop the power consumption. Some of it will happen due to shrinks, since the internal transistors will want VDDs far less than 1.8V (but not below 0.7V). As the VDD drops, the power consumption should drop by a squared factor. Once power consumption is low enough that a single AAA cell can support 34-48 hours of operation, CPUs will be as close to perfect as possible.
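The squared factor mentioned above falls out of the standard dynamic-power relation for CMOS logic, P = C * f * V^2. A quick sketch, with the capacitance and clock numbers invented purely for illustration (not real chip specs):

```python
def dynamic_power(cap_farads, freq_hz, vdd_volts):
    """Dynamic switching power of CMOS logic: P = C * f * V^2."""
    return cap_farads * freq_hz * vdd_volts ** 2

# Same (made-up) switched capacitance and clock; only VDD changes.
p_18 = dynamic_power(1e-9, 1e9, 1.8)   # VDD = 1.8 V
p_09 = dynamic_power(1e-9, 1e9, 0.9)   # VDD = 0.9 V
print(p_09 / p_18)  # ~0.25: halving VDD cuts dynamic power to a quarter
```

This is why voltage reductions that come along with process shrinks matter so much more than they look on paper.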

You've got that the wrong way around. Microsoft has a dependency on Intel! Last time I checked, Linux runs just fine on Intel processors, and that combination powers a big chunk of the web. Some of the most important network appliances are BSD based, and run on Intel processors. Intel processors are also commonly used in systems like SAN and NAS arrays.

Some eight years ago a laptop and desktop came to have the capabilities almost anybody needs.

Citation needed.

I heard the exact same quote when I purchased my 20MHz 486SX back in the day.

For one, a typical desktop PC from 2004 probably can't play back 1080p HD video without GPU acceleration.

The innovation should have turned on that day to making the thing thinner, lighter and smaller; to making it run all day - but it didn't.

That's exactly what happened. You have to realize that "performance" and "battery life" are interchangeable. Increased performance at the top end allows underclocked low-voltage processors that still perform OK but draw a fraction of the power. Most of the last decade of transistor development has been about operations-per-watt. Either you get more operations per second at 100W, or it lets you stay at a constant level of operations per second while reducing watts.
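One way to see the interchangeability described above: under idealized DVFS, where voltage scales down linearly with frequency, power falls roughly with f * V^2, so operations-per-watt improves quadratically as you back off the clock. A toy model (all the numbers here are invented, and real chips deviate from this ideal):

```python
def ops_per_watt(freq_ghz, vdd_volts):
    """Idealized efficiency model: one op per cycle, P ~ f * V^2."""
    ops_per_sec = freq_ghz * 1e9            # pretend one op per cycle
    power = freq_ghz * vdd_volts ** 2       # capacitance constant folded in
    return ops_per_sec / power

full_tilt = ops_per_watt(3.0, 1.2)   # hypothetical desktop part, flat out
throttled = ops_per_watt(1.5, 0.6)   # same die, half the clock, half the VDD
print(throttled / full_tilt)  # ~4x the work per joule at half the speed
```

The same transistor budget thus serves both the 100W desktop part and the low-voltage mobile part, which is the point being made above.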

The laptop that practically re-defined what it means to be light-weight and thin is the Apple MacBook Air, which is... wait for it... Intel based.

Instead Windows became more bloated (as it always has) to drive new product sales for Intel and GPU vendors to make ever more powerful systems to give us more beautiful chrome. That worked for a while. It was great for sales and margins back in the day.

On the contrary. While Windows comes with a larger installer package these days, that's mostly frameworks and drivers that aren't actually in use most of the time. Both Windows 7 and Windows 8 can outperform Windows XP on the same hardware in many cases!

You have to understand that the kernel is still pretty much the same thing, except that later versions have finer-grained locks, smarter schedulers, and revised driver models that allow more parallelism. None of this is "bloat". For example, "win32k.sys" on my Windows 7 SP1 64-bit operating system is just 3 MB in size. The closest comparison is Windows 2003/XP 64-bit, which has a 4.5 MB kernel. Hence, if anything, it's been shrinking!

They came out with the iPhone, and then the iPad. They gave us what we had long craved.

Walled gardens that don't even have a user-accessible filesystem. Now, don't get me wrong, I have an iPhone and an iPad, but you're going to have to pry my PC from my cold dead hands.

The iPhone is great to have in my pocket, but I'm never going to sit at my desk pecking away at that thing when I could use a PC instead.

Right about seven years ago ARM systems became "good enough" to do this and Apple released the iPod Touch - an innovative product that struck a chord with us. In 2007 came the iPhone. In 2008 Android. Ever since 2007 Intel has fiddled while Rome burned, producing "mobile" chips that burn multiple watts.

What enabled ARM to do that is not some magic non-Intel or non-Windows approach, but reduced transistor sizes. Intel has been reducing transistor sizes too, and they're far better at it than the competition. The reason that Intel hasn't previously concentrated on the embedded market is not because they don't have the technology -- they do -- but because they saw it as a low-profit market that wasn't worth their trouble when they can be selling chips in the server market for $2,000 each. ARM's board would probably sell some of their limbs (hah!) for that market, which is why you've been seeing so many articles on Slashdot recently about ARM making inroads into the server space.

Quite the interesting comment there. Nicely done. I don't get quite so many from the 'softy fans as I used to back in the day. Maybe I should try to be more polite.

Some eight years ago a laptop and desktop came to have the capabilities almost anybody needs.

Citation needed.

It was plenty. Eight years ago was 2004. We were on Dothan by then, and the last Prescott, which meant PCIe and SATA to erase the bandwidth bottleneck. Two years later was even better, as we got a major leap just then. These were (are!) killer chips for software that isn't utter crud, given a decent GPU. XP on these is still a great experi

Well, Intel tried to replace the x86 platform w/ Itanium, but despite being that much bigger than AMD, they couldn't. The reason x86 is not an issue on mobile is that mobile doesn't have the legacy software baggage that the desktop has.

These could be leveraged further to give its x86 chips a boost vis-a-vis ARM. The other players need to pull their act together & pool resources to counter this.

Not necessarily. Once ASML has developed these technologies, they will be sold to all customers on equal terms. Moreover, unlike normal shareholders, Intel will not have voting rights and can therefore not easily influence the strategy of ASML. ASML's only obligation is that the R&D investment is allocated to development of said technologies. Other ASML customers (Intel competitors) are welcome to take a share in ASML on similar terms, so similar announcements from the competition may come during the next few months. You may want to read the official press releases [asml.com].

You may be interested to know that ASML has 82% of the lithography market (by revenue), with equipment installed at most if not all manufacturers of CPUs and flash/DRAM memory. The semiconductor industry is driven by Moore's law; in a way, it is dependent on how fast ASML can develop equipment to produce ever-smaller features. The interest of ASML's customers in this customer co-investment program is not so much in a competitive advantage against each other, but rather in keeping up with Moore's law.

Disclaimer: I work for ASML (in R&D), but the views above are my own, etc.

While interesting from a technical standpoint, it's just more of the same from a business standpoint. Intel's main advantage for the last 20+ years has been being consistently one generation ahead of all competitors in process technology. This let them survive the times (e.g. the P4 era) when their designs turned out to be inferior to the competition, by either ramping up clock speed or using higher yields to lower prices.

The other players largely do pool their resources. This is why AMD no longer has fabs.

Usually, the first few years of a fab are its most expensive, when it has to both operate @ capacity and be profitable. Essentially, using the ASML and other technologies, Intel would either build new fabs or upgrade some of their existing ones to 450mm technology. Here, they'd get more than twice as many dies as they got on an equivalent 300mm wafer on the same lithography node (e.g. 22nm), and if one factors in that there might be a die shrink involved as well, make that even more. Translated into units, Intel woul

As another poster [slashdot.org] mentioned, Intel is not the only one who wants 450 mm wafers. A big part of the cost of wafer processing is proportional to the number of wafers and not to their surface area; that's why a transition to 450 mm will lead to cost reduction. This cost aspect actually doesn't apply to ASML's lithography tools (or so I believe), since the tool throughput (wafers per hour) is roughly inversely proportional to the wafer surface area. T

The companies that supply the costly manufacturing equipment to computer chip factories – also known as “tool” makers – are waiting to get “greater clarity” about how much they will be asked to pay for the industry’s transition to using 450 millimeter silicon wafers.

The Times Union reported Tuesday that the tool makers will be asked to foot $450 million of the $1 billion price tag for the first phase of a 450mm transition program that will take place at the University at Albany’s College of Nanoscale Science and Engineering.

Deborah Geiger, a spokeswoman with SEMI, the San Jose, Calif. trade group that represents the tool makers, said the organization is hosting a forum on April 4 at the NanoCollege that will touch on the issue of how the 450mm program will be structured.

“We are not aware that definitive details and amounts have been established and publicly communicated,” Geiger said. “SEMI members are interested in greater clarity around the program structure and funding, including the cost share scenarios.”

The details included in the Times Union story were included in documents used by the Empire State Development Corp. in its approval of $300 million in funding for the NanoCollege for the 450mm program and another IBM program to shrink chip features nearly in half, down to 14 nanometers.

New York state is providing $150 million in cash and $50 million in cheap power, for $300 million total, toward both programs, which will be located inside the college’s new $365 million NanoFab Xtension building under construction on Washington Avenue Extension.

Five leading chip companies that make up what’s known as the Global 450mm Wafer Development and Deployment Consortium – Intel, IBM, GlobalFoundries, Samsung and TSMC – will each contribute $75 million over five years toward the 450mm program.

Geiger says that a meeting is expected to be held in May in which suppliers to the G450C will be provided with a “more complete communication” on the 450mm program and how involved the tool makers will be.

Computer chips are currently made on wafers that are 300mm, or smaller. But the move to 450mm would save incredible amounts of money for manufacturers since output would roughly double with the larger size wafers.
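The "roughly double" figure checks out from geometry alone: a 450mm wafer has (450/300)^2 = 2.25x the area of a 300mm one, and the larger diameter also wastes proportionally less area at the edge. A back-of-the-envelope estimate using the classic gross-die-per-wafer approximation (the 100 mm^2 die size is hypothetical, and this is a textbook formula, not any fab's actual yield model):

```python
import math

def gross_dies(wafer_diam_mm, die_area_mm2):
    """Classic gross-die-per-wafer approximation with an edge-loss term."""
    r = wafer_diam_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))

d300 = gross_dies(300, 100)
d450 = gross_dies(450, 100)
print(d300, d450, round(d450 / d300, 2))  # 640 1490 2.33
```

So for this hypothetical die, the 450mm wafer yields about 2.3x as many gross dies, a bit better than the pure 2.25x area ratio because edge loss matters less on the bigger wafer.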

Since the wafers are growing larger, unlike the process sizes, which are shrinking, they should use larger units. Previously, a process size would be quoted as 0.45 microns, but when it shrank considerably more, they started using nanometers. Conversely, wafer sizes used to be called 200mm or 300mm, but now, since there is the potential for confusion between a 0.45 micron process size and a 0.45 meter wafer diameter (the former being 450nm and the latter 450mm), they should

You are off by a factor of ten. Process sizes are currently in the 22-32 nm range. 0.45 micron (450 nm) would have been close to the state of the art almost twenty years ago. http://en.wikipedia.org/wiki/22_nanometer [wikipedia.org]
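To put the two "450"s being confused above side by side, they are six orders of magnitude apart (a quick sketch; nothing here comes from the article itself):

```python
# The two scales being conflated: a 1990s-era process node vs. the
# proposed next-generation wafer diameter, both expressed in meters.
nm, mm = 1e-9, 1e-3
process_450nm = 450 * nm   # ~mid-1990s lithography node
wafer_450mm   = 450 * mm   # proposed next-gen wafer diameter
print(round(wafer_450mm / process_450nm))  # 1000000
```

A millimeter is a million nanometers, so there is little real risk of confusing a 450mm wafer with a 450nm process.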