NOTE: LPP (in the text below) stands for "Laser Produced Plasma", Cymer has a description with pictures here.

Quote:

Simply put, an LPP source fires a laser at droplets of tin. The tin responds by radiating in the desired EUV range. The art comes in supplying a reliable stream of tin droplets and getting as much out of that stream as possible.

There are a number of aspects of LPP that Cymer says contribute advantages when it comes to efficiency. Probably the most straightforward one is the “collector.” It’s one thing to generate the EUV radiation; it’s quite another to gather it up and deliver it to the wafer. The structure that does this gathering is called, logically enough, the collector.

For a setup like LPP, you can well imagine that you have a droplet that glows, giving off radiation spherically. So you can create a collector that more or less surrounds the droplet, capturing a big chunk of the emitted radiation.

Quote:

One of the challenges of LPP has been the stream of droplets, many of which might not even be hit by the laser. Timed too close together, one droplet being hit might impact the reaction of the next droplet. Cymer says that they’ve spaced the droplets out further so that every one of them gets hit without interference. They run at the rate of 50-60 kHz, which translates to tens of thousands of droplets hit per exposure field.
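
Just to put numbers on that claim: at 50-60 kHz the droplet count per field works out as below. Only the repetition rate comes from the article; the ~0.5 s per-field exposure time is my own illustrative guess.

```python
# Back-of-the-envelope: droplets fired during one exposure field.
# The 50-60 kHz repetition rate is quoted above; the per-field
# exposure time is an illustrative assumption, not a quoted figure.

def droplets_per_field(rep_rate_hz: float, exposure_s: float) -> int:
    """Droplets fired during one field exposure at a given repetition rate."""
    return int(rep_rate_hz * exposure_s)

print(droplets_per_field(50_000, 0.5))  # -> 25000
print(droplets_per_field(60_000, 0.5))  # -> 30000
```

Either way you land squarely in the "tens of thousands of droplets per exposure field" range quoted above.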

Quote:

When tin is blasted, well, it tends to go places. Like the collector. This creates a concern about maintenance, since tin debris will cloud the clarity of the collector, reducing its effectiveness. And yet no one wants to have to take the machine down frequently to clean it up.

One of the approaches Cymer uses to minimize this is to have hydrogen in the chamber. The tin vapor reacts with this to create tin hydride, which is volatile. They can pump this out, reducing (although not eliminating) the amount of tin that ends up depositing itself elsewhere.

All of this was fine, but there was one more issue keeping the full energy in a droplet from being exploited. The droplets are 30 µm across, and yet, due to diffraction limits, the laser beam is 100 µm wide. Three times as big as the droplet, meaning that a large amount of the energy in the laser is wasted.

What they found is that they could apply a laser pre-pulse, which seemed to me like a red-eye reduction flash in a camera. This pre-pulse, of less energy than the main pulse, would puff up the droplet so that its size was more like that of the laser beam. So now much more of the laser beam is actually interacting with tin, making the EUV generation much more efficient.
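
The geometry behind that waste is just an area ratio. A rough sketch, treating the beam as uniform over its spot (real beams are closer to Gaussian, so this is only indicative):

```python
def intercepted_fraction(droplet_um: float, beam_um: float) -> float:
    """Fraction of a uniform circular beam's energy that hits a droplet,
    taken as the ratio of cross-sectional areas (capped at 1)."""
    return min(1.0, (droplet_um / beam_um) ** 2)

# A bare 30 um droplet in a 100 um beam intercepts only ~9% of the energy:
print(intercepted_fraction(30, 100))
# After the pre-pulse puffs the droplet up to roughly the beam diameter:
print(intercepted_fraction(100, 100))  # -> 1.0
```

So puffing the droplet up to the beam size is worth roughly an order of magnitude in coupling, which matches the "much more efficient" claim above.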

Quote:

But, as a reminder, all of this gets us only half-way to the goal of 100 W. More is needed, which takes us to the other part of their announcement, where they blow past 100 W in the lab. There are three components to their getting much higher power. One is simply generating more CO2 power – a more powerful “laser.” The second involves improving the collector. Collector design seemed to come up a lot in the conversation; there’s a lot of focus (so to speak) on making sure that, after you work so hard to generate the EUV radiation, it doesn’t leak away without getting shipped to the wafer.

Finally, having developed this pre-pulse technology, they plan to improve the power output by refining that technology.

Does anyone know if 193nm immersion exposure time is the same for each pass needed? (ie, Will the exposure time of Quadruple patterning in 2016 for Intel take twice as long as the exposure time for double patterning in 2014 etc.)

It's not the exposure time that matters, but yes it is basically the same for each pass, though the pattern won't be.

However, quadruple patterning is not as simple as exposing the same wafer four times in a row. In between each exposure you have to do multiple other processes (spacer deposition, chemical/thermal freeze or reactive ion etching depending on which multiple exposure technique you use). Extra patterning steps means higher costs.

Also, you have to keep in mind that with each step you have to go through, there will always be some defects added. So if you have to pattern the same layer four times rather than two, you will inevitably have a lower yield. Thus you end up putting in twice as much work, and getting less working product out.
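
That compounding yield loss is easy to sketch. The per-pass yield below is a made-up illustrative number, and real passes aren't truly independent, but it shows the direction:

```python
def layer_yield(per_pass_yield: float, passes: int) -> float:
    """Yield of one layer patterned in `passes` exposures, assuming each
    pass independently succeeds with probability `per_pass_yield`."""
    return per_pass_yield ** passes

# Illustrative per-pass yield of 98% (an assumption, not a quoted number):
print(round(layer_yield(0.98, 2), 4))  # double patterning    -> 0.9604
print(round(layer_yield(0.98, 4), 4))  # quadruple patterning -> 0.9224
```

Every extra pass multiplies in another chance to spoil the layer, so quadruple patterning always yields worse than double for the same per-pass quality.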

In contrast, the light sources for 193nm immersion are much more modest in their power outputs.....yet the quoted wafer throughput in post #22 is quite high. Maybe less light source power is needed due to the longer wavelength's lower absorption as it travels to the wafer?

That is correct.

The fundamental difference between EUV and DUV is that there are many materials with excellent transparency for DUV, but no material at all with good transparency for EUV. That's why EUV has to use mirrors instead of lenses, and it's why the inside of the scanner has to be near vacuum conditions. But even with these precautions, the vast majority of the EUV light is lost before it ever reaches the wafer.

EUV and DUV resists require roughly the same dose, so in order to deliver it despite those losses, EUV sources must be far more powerful.
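
To get a feel for the scale, here's a back-of-the-envelope sketch. Every number in it (dose, field area, throughput, end-to-end transmission) is an illustrative assumption of mine, not a figure from the thread:

```python
def required_source_power_w(dose_mj_cm2: float, field_cm2: float,
                            fields_per_s: float, transmission: float) -> float:
    """Source power (W) needed so the wafer still receives the target dose
    after the optical train throws away all but `transmission` of the light."""
    power_at_wafer_w = dose_mj_cm2 * 1e-3 * field_cm2 * fields_per_s
    return power_at_wafer_w / transmission

# Assumed: 15 mJ/cm2 dose, 8.5 cm2 field, 10 fields/s, 2% transmission.
# Only ~1.3 W has to reach the wafer, but the source must supply ~64 W:
print(required_source_power_w(15, 8.5, 10, 0.02))
```

The striking part is that the wafer itself only needs a watt or so; nearly all of the required source power exists just to pay for the optical losses.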

Also, you have to keep in mind that with each step you have to go through, there will always be some defects added. So if you have to pattern the same layer four times rather than two, you will inevitably have a lower yield.

I didn't think about that.....thank you for pointing this out!

So just to keep the defects per wafer the same on the new node, the scanner would need to have twice the accuracy/precision of the scanner used on the old node.

Quote:

Originally Posted by khon

Thus you end up putting in twice as much work, and getting less working product out.

I noticed in the link below that prices per wafer are really jumping (from a 25% increase per node to 60%+) starting at 20nm

Prior to 20nm, you can see that wafer prices trended to approximately a 25% per node increase. Starting at 20nm, that increase jumps to around 60% per node. Most of this increase comes from the extra process steps required to do double masking and etching, as well as the double patterning software the fabs will have to use in mask preparation.

Furthermore, according to the quote below (from the linked article), costs on the design side will also increase due to the use of multi-patterning.

Quote:

However, what really shines the light on “share and share alike” is that at 20nm, designers will also be required to purchase new double patterning software, and do additional work in the design layout and verification to enable the actual double patterning processes in the fab. Like the earlier manufacturing tools, the double pattern checking and decomposition capability requires a whole new software engine under the hood to properly analyze the layout. But unlike the earlier layout issues, double patterning violations can be much more pervasive, and fixing them is mandatory, not just recommended.

Combine these increased costs (fab-side plus the extra design-side software/work) with the possibility of reduced yields and I can see why fabs would want to reduce the number of extra patterning passes needed per wafer.
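
To see how quickly those two per-node rates diverge, here's the compounding over three node transitions (the $1000 starting price is arbitrary, chosen only for illustration):

```python
def price_after_nodes(start_price: float, per_node_increase: float,
                      nodes: int) -> float:
    """Wafer price after `nodes` node transitions at a fixed per-node increase."""
    return start_price * (1 + per_node_increase) ** nodes

# Three node transitions at the historical ~25% rate vs the quoted ~60% rate:
print(round(price_after_nodes(1000, 0.25, 3)))  # -> 1953
print(round(price_after_nodes(1000, 0.60, 3)))  # -> 4096
```

At 25% per node the price roughly doubles over three nodes; at 60% it more than quadruples, which is why the jump at 20nm matters so much.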

(1) When it comes to anything pertaining to prices, always be wary of the date of the pricing source in question - this one is nearly 18 months old.

(2) When it comes to anything pertaining to prices on process nodes that are not yet in production, always be wary of over-hyped pricing that never becomes reality but makes for great sexy articles in the trade journals. Think about Intel's tray pricing for 1k units...no one ever actually pays Intel that price, they always pay something much less.

(3) On the matter of that specific graph, something is afoot because 65nm foundry wafers cost $2.1k/wfr all the way back in 2007 when they were at their priciest...I imagine they are now down to ~$1300/wfr, if not even lower...so I'd question the numerical accuracy (and hence relevance) of the other numbers in the graph as well.

When it comes to forecasting price per wafer on future nodes, the graph above is kind of like the once-and-forever doomsday "brick wall of escalating costs" that gets bandied about every node but then fails to materialize once the contracts are finalized and the node goes into production.

Where wafer prices really start rising is when you include IC design costs in the per-wafer cost (so not just the cost to manufacture the wafer, but the cost to the fabless company of designing the IC plus the cost of producing the wafers) while the fabless company in question is looking at low-volume runs of its IC. In those scenarios the price per wafer blows up at smaller nodes, not for the manufacturing costs per se, but because of the mask-set cost, which is a (more or less) fixed expense, much like the fixed expense of designing the IC itself.
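
That low-volume blow-up is just fixed-cost amortization. A minimal sketch, with all dollar figures invented for illustration:

```python
def effective_wafer_cost(wafer_cost: float, fixed_costs: float,
                         wafers: int) -> float:
    """Per-wafer cost once fixed expenses (mask set, IC design) are
    amortized over the run; fixed costs dominate at low volume."""
    return wafer_cost + fixed_costs / wafers

# Illustrative: $5k manufacturing cost per wafer, $10M of mask-set
# plus design costs spread over the run (all numbers assumed):
print(round(effective_wafer_cost(5_000, 10_000_000, 100)))      # low volume
print(round(effective_wafer_cost(5_000, 10_000_000, 100_000)))  # high volume
```

At 100 wafers the fixed costs swamp the manufacturing cost entirely; at 100,000 wafers they're a rounding error, which is why the same node can look cheap to a high-volume player and ruinous to a small one.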

Some of these companies must have old info on their websites. I find it hard to believe that the reflectivity of EUV is ~70%. 70% was state of the art for EUV/Near X-Ray wavelengths about 15 years ago! Though, there must be some reason that EUV light sources are too dim to allow high speed exposures. Maybe what was state of the art in the late 90's is just becoming affordable

Quote:

Some of these companies must have old info on their websites. I find it hard to believe that the reflectivity of EUV is ~70%. 70% was state of the art for EUV/Near X-Ray wavelengths about 15 years ago!

It is interesting that the power of the light source is a gating factor.

According to this article Cymer is able to get the power (at the IF, or intermediate focus) up to 160 watts with a larger 28 kW laser and pre-pulse in the lab.....but they just can't sustain it.

Quote:

At the SPIE conference, Cymer, the most advanced of the EUV source developers, reported a 50 W average power at IF, with a duty cycle of 80%. Clearly, that’s still some way short of ASML’s target, but the San Diego company does believe it is now on track to deliver HVM sources, adding that in tests with a higher-power (28 kW) CO2 laser and the additional “pre-pulse” laser it has shown a maximum 160 W output at a low duty cycle.

Quote:

But Wennink also suggested that the focus on output power was no longer the company’s main concern. “Wattage is not the issue,” he said. “The issue is how can you make [the source] more reliable. The tool needs to work 75% of the time, and our focus today is to get reliability up – not so much the power.”

Not sure about all the factors holding back improvements in duty cycle at high wattage, but I do see "tin droplet debris" (produced after the laser hits the tin droplet) mentioned several times in the following quotes from the above linked article.

Quote:

This is the laser-produced plasma (LPP) method. Imagine a microscopic clay-pigeon shoot, only one that fires out thousands of clays per second, each of which must be hit, twice, in its exact center, and whose debris has to shower out in exactly the same pattern every time, and you get the general idea.

Quote:

Where both companies have encountered major difficulties is in the tin droplet generator. Alibrandi says that droplet instability has been the major bottleneck for Gigaphoton, describing this as the source’s “nemesis”.

“It’s all about hitting your target,” he said. But getting the generator’s nozzle to behave reliably, so that the tin target appears in the same place each time and is easier to hit, has been a key engineering problem. The droplets of tin that emerge from the generator's nozzle measure just 20 µm in size. The smaller size (compared with Cymer’s design) helps with debris mitigation, but also means that any tiny specks of debris in the nozzle will send the droplets veering away from their intended path.

Now if we look back to the third quote from post #26 of this thread, Cymer has mentioned that tin droplet debris can reduce the clarity, and thus the efficiency, of the collector. More tin debris on the collector.....less power at the intermediate focus (where power is measured for these light sources).

Quote:

When tin is blasted, well, it tends to go places. Like the collector. This creates a concern about maintenance, since tin debris will cloud the clarity of the collector, reducing its effectiveness. And yet no one wants to have to take the machine down frequently to clean it up.

One of the approaches Cymer uses to minimize this is to have hydrogen in the chamber. The tin vapor reacts with this to create tin hydride, which is volatile. They can pump this out, reducing (although not eliminating) the amount of tin that ends up depositing itself elsewhere.

Therefore it appears to me that one factor holding back duty cycle at high wattage involves getting the laser and pre-pulse more accurately pointed at the tin droplet.

More accurate and consistent placement of the pre-pulse and laser on the tin droplet *should* result in a more predictable and controlled tin debris scattering.

Less debris accumulating on the collector = more EUV collected at the IF (where the power level of these light sources is measured) for a longer period of time (ie, a higher duty cycle).

Quote:

Some of these companies must have old info on their websites. I find it hard to believe that the reflectivity of EUV is ~70%. 70% was state of the art for EUV/Near X-Ray wavelengths about 15 years ago! Though, there must be some reason that EUV light sources are too dim to allow high speed exposures. Maybe what was state of the art in the late 90's is just becoming affordable

Once EUVL reaches its performance limits at 13.5nm, there is also potential for continued scaling with further wavelength reduction. Mirror technology for 6.7-6.8nm is feasible using Mo-B4C or La-B4C multilayers, with theoretical reflectivity as high as 80%. Sources using the existing Laser Produced Plasma (LPP) architecture can be extended using new target materials such as gadolinium or terbium instead of tin.

According to the above article 80% reflectivity (for 6.7nm EUV) is theoretically possible.

Quote:

Current EUVL exposure tools use a lens NA of 0.25, with plans to increase to 0.33 on next generation tools. Designs for even higher NA have been completed. However, the choice of lens design involves a trade-off between slightly reduced imaging performance, due to the partially obscured pupil in a 6-mirror configuration, and full-field imaging but reduced lens transmission in an 8-mirror design. Since each EUV mirror has a maximum reflectivity of ~70%, every pair of mirrors added to the projection lens (or illuminator) results in a transmission loss of about a factor of 2. Currently, increasing NA, if this results in reduced lens transmission, is not the best scaling path due to the challenges of increasing source power to maintain productivity. There are other factors that also place increasing demand on higher source power, primarily the need for higher resist doses than initially targeted in order to deliver required resolution and process control. Instead, the initial path to EUVL extendibility is likely to be well known RET approaches such as off-axis illumination, and later double patterning or phase shift masks.

Although the 80% is mentioned for 6.7nm EUV (and may well be specific to 6.7nm), I would think having greater reflectivity should also help with 13.5nm EUV power transmission in situations where a greater number of mirrors is needed for the "projection lens" (ie, getting full-field imaging at a higher numerical aperture). As the quote above mentions, going from six mirrors to eight (at 70% reflectivity each) cuts EUV power transmission to the wafer in half.

Either that, or EUV source power (measured at the IF) would have to double for an 8-mirror projection lens high-resolution set-up.
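
That factor of 2 is easy to check, and the same arithmetic shows why 80% mirrors would matter for an 8-mirror design (the reflectivity values are the ones quoted above):

```python
def train_transmission(reflectivity: float, mirrors: int) -> float:
    """Fraction of EUV light surviving a train of `mirrors` reflections."""
    return reflectivity ** mirrors

six = train_transmission(0.70, 6)    # 6-mirror design at 70% per mirror
eight = train_transmission(0.70, 8)  # 8-mirror design at 70% per mirror
print(round(six, 3))    # -> 0.118
print(round(eight, 3))  # -> 0.058  (about half of the 6-mirror case)

# At the theoretical 80% reflectivity, an 8-mirror train actually
# beats a 6-mirror train of 70% mirrors:
print(round(train_transmission(0.80, 8), 3))  # -> 0.168
```

So two extra 70% mirrors cost almost exactly a factor of 2, but bumping each mirror to 80% would more than buy that loss back.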

According to this article ASML does have plans for higher NA (using 8 mirrors for full field imaging) for 13.5nm EUV wavelength allowing a resolution of 11nm for the tool.

See the 6/8 mirror set-up "under study" (shaded in blue) for >0.40 NA, allowing a resolution of 11nm with the 13.5nm EUV wavelength and a resolution <8nm with the 6.7nm EUV wavelength.

So the biggest obstacle for EUVL seems to be the droplet generator. That makes sense, for it sure does seem to be an intrinsically difficult process. Thanks for all the research you've done, BN!

Looking through some EUV source research (as a layperson trying to find his way through this extremely complicated area), it appears computer simulation might be a way to speed the development of EUV sources.

The comprehensive HEIGHTS package was further upgraded and used to analyze LPP sources in full 3-D geometry of 10-50 µm tin droplet targets, as single droplets as well as distributed microdroplets with equivalent mass, to study mass dependence, laser parameter efficiency, and atomic and ionic debris generation, and to optimize EUV radiation as well as predict the damage to the optical collection system from energetic debris and the requirements for mitigating systems to reduce debris. The debris effect on the mirror collection system was analyzed using our 3D ITMC-DYN Monte Carlo code. Modeling results were benchmarked against recent experimental data.

A new center for materials under extreme environments (CMUXE) has been established to benchmark HEIGHTS models for LPP source production and the study of debris effects on mirror reflectivity. Recent experimental results for LPP and the CE agree well with HEIGHTS models.

But how long will such research (such as this and others) take to put into production?

Simulation is being used extensively as a tool to increase production capacity. Simulation software used by Cymer Inc. (a leading producer of laser illumination sources) increased the production capacity from 5 units/month at the beginning of 1999 to 45/month at the end of 1999, an increase of around 400% [5].

Visualization and graphics have undoubtedly made a huge impact on all simulation companies. Easy-to-use modeling has resulted in low-priced packages that would have been unthinkable just a few years ago. Simulation technology has shot up in value to other related industries. The simulation industry is coming of age and is no longer just the domain of academics.

Intel has really been aggressively using what they refer to as Computational Lithography for mask optimization and it naturally extends into the EUV regime.

There are some really impressive images out there (let me know if you aren't having any luck finding them) comparing the image fidelity that is enabled with computational litho versus traditional mask optimization.

According to the "Why Computational Lithography" PDF the above four images (going from top to bottom) are four steps in a particular type of source optimization.

Speaking of source optimization, I found the following two examples of Computational lithography (Source Mask Optimization vs Optical proximity correction) interesting:

Notice in the last image (directly above), labelled "Source-Mask Optimization (SMO)", it is mentioned that the single-exposure mask is optimized for the illumination pattern (aka the complex optimized illumination source) rather than being constrained to the target shape topology.

Now compare this to the image labelled "Traditional Computational Lithography applied to 22nm". Notice it says "double exposure masks modified by Optical Proximity Correction". This computational lithography uses a much simpler dipole illumination source pattern.

The major difference (to me) appears to be that the more complex "source and mask optimized" computational lithography is "co-optimized" to a greater degree than the OPC (mask-level) computational lithography with its simpler illumination pattern.

Even going all the way to the 0.5um (500nm) node, when we built fabs back then we would drive steel beams all the way through the ground surface until we hit bedrock.

When the Kobe earthquake struck Japan in 1995, our MIHO fabs in Japan got the living daylights shook out of them, and afterwards we found their internal calibration targets had shifted by more than 6 inches (back then, typical miscalibration skew was on the order of 25-50um on a very bad day; to be off by inches was simply unheard of).

Wow..thanks for the info! Very fascinating. I always wondered what type of special precautions TSMC fabs have to protect themselves against the frequent earthquakes in TW. I guess the fab buildings are designed to compensate for a certain magnitude of earthquake before the tremors botch the machines?


They do. Think about the hard drive in your laptop. The heads and the arms auto-park themselves on the fly in response to detected movement that is predicted to become problematic.

The instant you grab the laptop from the side and begin to lift it for example, heads get parked because the drive is designed to expect even worse gyrations are to come before it gets better, then it gives the heads the all-clear signal and they come back out to resume data operations.

Fabs are built around that same kind of idea when it comes to parking tools and so forth, based on seismic detection and prediction models.

It sounds fancy and high-tech, but really it's not, considering that even your $60 bare-bones laptop drive sports the tech to make it happen, so your multi-billion dollar fabs do too.

But just like the hard drive that still dies when you drop your laptop onto the sidewalk, earthquakes still destroy the equipment in a fab when the epicenter is too close.