
The physical and economic limits of Moore's Law are being approached as the commercial IC fab industry continues shrinking device features toward the atomic scale. Early signs of these limits appear when attempting to pattern the smallest possible features with lithography: stochastic variation in the composition of the photoresist, as well as in the number of incident photons, combines to destroy determinism for the smallest devices in R&D. Even the most advanced Extreme Ultra-Violet (EUV) exposure tools from ASML cannot avoid this problem without reducing throughput, and thereby increasing the cost of manufacturing.

Since the beginning of IC manufacturing over 50 years ago, chip production has been based on deterministic control of fabrication (fab) processes. Variations in process parameters could be controlled statistically to ensure that all transistors on a chip performed nearly identically. Design rules could be set based on assumed in-fab distributions of critical dimension (CD) and misalignment between layers to determine the final performance of transistors.

As the IC fab industry has evolved from micron-scale to nanometer-scale device production, lithographic patterning has evolved to bend 193nm-wavelength light using Off-Axis Illumination (OAI) of Optical-Proximity Correction (OPC) mask features as part of Reticle Enhancement Technology (RET), allowing <40nm half-pitch (HP) line arrays to be printed with good definition. The most advanced masks and 193nm-immersion (193i) steppers today focus more photons into each cubic nanometer of photoresist to improve the contrast between exposed and non-exposed regions in the aerial image. To avoid the escalating cost and complexity of multi-patterning with 193i, the industry needs Extreme Ultra-Violet Lithography (EUVL) technology.

Figure 1 shows Dr. Britt Turkot, who has been leading Intel’s integration of EUVL since 1996, reassuring a standing-room-only crowd during a 2017 SPIE Advanced Lithography (http://spie.org/conferences-and-exhibitions/advanced-lithography) keynote address that the availability of EUVL steppers for manufacturing has been steadily improving. The new tools are close to 80% available for manufacturing, but they may need to process fewer wafers per hour to ensure high-yielding final chips.

The KLA-Tencor Lithography Users Forum was held in San Jose on February 26 before the start of SPIE-AL; there, Turkot also provided a keynote address that mentioned the inherent stochastic issues associated with patterning 7nm-node device features. We must ensure zero defects within the 10 billion contacts needed in the most advanced ICs. Given 10 billion contacts, it is statistically certain that some will be subject to 7-sigma fluctuations, and this creates problems in controlling the limited number of EUV photons reaching the target area of a resist feature. The volume of resist material available to absorb EUV in a given area is reduced by the need to avoid pattern-collapse when aspect-ratios increase over 2:1, so 15nm half-pitch lines will generally be limited to just 30nm-thick resist. “The current state of materials will not gate EUV,” said Turkot, “but we need better stochastics and control of shot-noise so that photoresist will not be a long-term limiter.”

From the LithoGuru blog of gentleman scientist Chris Mack (http://www.lithoguru.com/scientist/essays/Tennants_Law.html):

One reason why smaller pixels are harder to control is the stochastic effects of exposure: as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up. The uncertainty produces line-width errors, most readily observed as line-width roughness (LWR). To combat the growing uncertainty in smaller pixels, a higher dose is required.
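
To put rough numbers on that trade-off, the sketch below estimates the EUV dose needed to hold photon shot-noise to a fixed level as the pixel shrinks, using only Poisson statistics; the 3% noise target and pixel sizes are illustrative assumptions, not values from Mack’s essay.

```python
import math

# Photon energy at the 13.5 nm EUV wavelength: E = h*c / lambda (~92 eV)
E_PHOTON_J = 6.626e-34 * 3.0e8 / 13.5e-9

def dose_for_noise(pixel_nm: float, rel_noise: float) -> float:
    """Dose (mJ/cm^2) needed so that Poisson shot noise 1/sqrt(N) equals
    the target relative uncertainty within one square pixel."""
    n_photons = 1.0 / rel_noise ** 2          # N such that 1/sqrt(N) = target
    area_cm2 = (pixel_nm * 1e-7) ** 2         # nm -> cm, then square
    return n_photons * E_PHOTON_J / area_cm2 * 1e3   # J/cm^2 -> mJ/cm^2

for pixel in (20.0, 10.0, 5.0):
    print(f"{pixel:4.0f} nm pixel: {dose_for_noise(pixel, 0.03):6.1f} mJ/cm^2 "
          "for 3% shot noise")
# ~4 mJ/cm^2 at 20 nm, ~16 mJ/cm^2 at 10 nm, ~65 mJ/cm^2 at 5 nm
```

Halving the pixel edge quadruples the required dose, which is exactly Mack’s point that smaller pixels demand higher doses.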

Figure 2 shows that the stochastics within EUVL start with direct photolysis and include ionization and electron scattering within a given discrete photoresist volume, as reported by Solid State Technology in 2010.

Figure 2. Discrete acid generation in an EUV resist is based on photolysis as well as ionization and electron scattering; stochastic variations of each must be considered in minimally scaled aerial images. (Source: Solid State Technology)

Resist R&D

During SPIE-AL this year, ASML provided an overview of the state of the craft in EUV resist R&D. There has been steady resolution improvement over 10 years with Photo-sensitive Chemically-Amplified Resists (PCAR) from 45nm to 13nm HP; however, 13nm HP required 58 mJ/cm2, and provided a DoF of 99nm with 4.4nm LWR. The recent non-PCAR Metal-Oxide Resist (MOR) from Inpria has been shown to resolve 12nm HP with 4.7nm LWR using 38 mJ/cm2, and increasing the exposure to 70 mJ/cm2 has produced 10nm HP L/S patterns.

In an EUVL tool with variable pupil control, reducing the pupil fill increases the contrast such that 20nm-diameter contact holes can be printed with 3nm Local Critical-Dimension Uniformity (LCDU). The challenge is to get LCDU to <2nm to meet the specification for future chips. ASML’s announced next-generation >0.5 numerical aperture (NA) EUVL stepper will use anamorphic mirrors and masks, which will double the illumination intensity per cm2 compared to today’s 0.33 NA tools. This will inherently improve the stochastics, when the tool is eventually ready after 2020.

The newest generation of EUVL steppers uses a membrane between the wafer and the optics so that any resist out-gassing cannot contaminate the mirrors, and this allows a much wider range of materials to be used as resists. Regarding MOR, there are 3.5 times more absorbed photons and 8 times more electrons generated per photon compared to PCAR. Metal hard-masks (HM) and other under-layers create reflections that have a significant effect on LWR, requiring tuning of the materials in resist stacks.

imec, the default R&D hub of the world, has been testing EUV resists from five different suppliers, targeting 20 mJ/cm2 sensitivity with 30nm thickness for PCAR and 18nm thickness for MOR. All suppliers were able to deliver the requested resolution of 16nm HP line/space (L/S) patterns, yet all resists showed LWR >5nm. In another experiment, the dose-to-size for imec’s “7nm-node” metal-2 (M2) vias with a nominal pitch of 53nm was ~60 mJ/cm2. All else being equal, lithography that is three times slower costs three times as much per wafer pass.

As detailed in Part 1 of this article published last month by SemiMD, the inaugural Critical Materials Council (CMC) Conference happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is Part 2 of the conversation among the industry experts on the panel.

KORCZYNSKI: We heard from David Thompson [EDITOR’S NOTE: Director of Process Chemistry, Applied Materials presented on “Agony in New Material Introductions - Minimizing and Correlating Variabilities”] today on what we must control, and he gave an example of a so-called trace-contaminant that was essential for the process performance of a precursor, where the trace compound helped prevent particles from flaking off chamber walls. Do we need to specify our contaminants?

GIRARD: Yes. To David’s point this morning, every molecule is different. Some are very tolerant due to the molecular process associated with it, and some are not. I’ll give you an example of a cobalt material that’s been talked about, where it can be run in production at perhaps 95% in terms of assay, provided that one specific contaminant is less than a couple of parts-per-million. So it’s a combination of both, it’s not assay OR a specification of impurities. It’s a matter of specifying the trace components that really matter when you reach the point that the data you gather gives you that understanding, and obviously an assay within control limits.

HEMPHILL: Talking about whether we’re over-specifying or not, the emphasis is not about putting the right number on known parameters like assay that are obvious to measure; the emphasis is on identifying and understanding what makes up the rest of it and, in a sense, trying to over-specify that. If you identify through mass-spectrometry and other techniques that some fraction of a percent is primarily, say, five different species, then it’s a matter of finding out how to individually monitor, track, and control those as separate parameters. So from a specification point of view what we want is not necessarily the lowest possible numbers, but rather to expand how many things we’re looking at so that we’re capturing everything that’s there.

KORCZYNSKI: Is that something that you’re starting to push out to your suppliers?

HEMPHILL: Yes. It depends on the application we’re talking about, but we go into it with the assumption that just assay will not be enough. Whether a single molecule or a blend of things is supposed to be there, we know that just having those be controlled by specification will not be sufficient. We go under the assumption that we are going to identify what makes up the remaining part of the profile, and those components are going to need to be controlled as well.
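
As an editorial aside, the shift Hemphill describes, from one catch-all assay number to individually tracked species, can be pictured as a specification keyed by named impurities. The sketch below is purely illustrative; the species names and limits are hypothetical.

```python
# Hypothetical specification: assay plus individually tracked trace species,
# rather than a single "everything else < X" catch-all.
spec = {
    "assay_pct_min": 99.5,
    "trace_ppm_max": {           # each known species gets its own limit
        "species_A": 2.0,
        "species_B": 10.0,
        "species_C": 50.0,       # some impurities are far more tolerable
    },
}

def lot_passes(assay_pct: float, traces_ppm: dict) -> bool:
    """Check a lot against the assay floor and per-species trace limits."""
    if assay_pct < spec["assay_pct_min"]:
        return False
    return all(traces_ppm.get(name, 0.0) <= limit
               for name, limit in spec["trace_ppm_max"].items())

print(lot_passes(99.7, {"species_A": 1.5, "species_B": 4.0, "species_C": 30.0}))  # True
print(lot_passes(99.7, {"species_A": 3.2, "species_B": 4.0, "species_C": 30.0}))  # False
```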

KORCZYNSKI: Is that something that has changed by node? Back when things were simpler say at 45nm and larger, were these aspects of processing that we could safely ignore as ‘noise’ but are now important ‘signals’?

HEMPHILL: Yes, we certainly didn’t pay as close attention just a couple of generations ago.

KORCZYNSKI: That seems to lead us to questions about single-sourcing versus dual-sourcing. There are many good reasons to do both, but not simultaneously. However, it seems that because of all of the challenges we’ve heard about over the last day-and-a-half of this conference there is a greater burden on suppliers, and for critical materials the fabs are moving toward more single-sourcing over time.

SMYTHE: I think that it comes down to more of a concern over geographic risk. I’ll buy from one entity if that entity has more than one geographic location for the supply, so that I’m not exposed to a single ‘Act of God’ or a ‘random statistical occurrence of global warming.’ So for example I need to ask if a supplier has a place in the US and a place in France that makes the same thing, so that if something bad happens in one location it can still be sourced? Or do you have an alternate-supply agreement that if you can’t supply it you have an agreement with Company-X to supply it so that you still have control? You can’t come to a Micron and say we want to make sure that we get at minimum 25% no matter what, because what typically happens with second-sourcing is Company-A gets 75% of the business while Company-B gets 25%. There are a lot of reasons that that doesn’t work so well, so people may have an impression that there’s a movement toward single-source but it’s ‘single flexible-source.’

HEMPHILL: There are a lot of benefits of dual- or multiple-sourcing. The commercial benefits of competition can be positive and we’re for it when it works. The risk is that as things are progressing and we’re getting more sensitive to differences in materials it’s getting harder to maintain that. We have seen situations where historically we were successful with dual-sourcing a raw material coming from two different suppliers or even a single supplier using two different manufacturing lines and everything was fine and qualified and we could alternate sources invisibly. However, as our sensitivity has grown over time we can start to detect differences.

So the concept of being ‘copy-exactly’ that we use in our factories, we really need production lines to do that, and if we’re talking about two different companies producing the same material then we’re not going to get them to be copy-exactly. When that results in enough of a variation in the material that we can detect it in the factory then we cannot rely upon two sources. Our preference would be one company that maintains multiple production sites that are designed to be exactly the same, then we have a high degree of confidence that they will be able to produce the same material.

GIRARD: I can give you a supplier perspective on that. We are seeing very different policies from different customers, to the point that we’re seeing an increase in the number of customers doing single-sourcing with us, provided we can show the ability to maintain business continuity in case of a problem. I think that the industry became mature after the tragic earthquake and tsunami in Japan in 2011 with greater understanding of what business continuity means. We have the same discussions with our own suppliers, who may say that they have a dedicated reactor for a certain product with another backup reactor with a certain capacity on the same site, and we ask what happens if the plant goes on strike or there’s a fire there?

A situation where you might think the supply was stable involved silane in the United States. There are two large silane plants in the United States that are very far apart from each other, and many Asian manufacturers depend upon them. When U.S. ports were shut down by a long strike there was no way that material could ship out of the U.S. to customers. So, yes, there were two plants, but in such an event you wouldn’t have global supply. So there is no one way to manage our supply lines, and we need to have conversations with our customers to discuss the risks. How much time would it take to rebuild a supply-chain source with someone else? If you can get that sort of constructive discussion going then customers are usually open to single-sourcing. One regional aspect is that Asian customers tend to favor dual-sourcing more, but that can lead to IP problems.

[DISCLOSURE: Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

The inaugural Critical Materials Council (CMC) Conference, co-sponsored by Solid State Technology, happened May 5-6 in Hillsboro, Oregon. Held just after the yearly private CMC meeting, the public CMC Conference provides a forum for the pre-competitive exchange of information to control the supply-chain of critical materials needed to run high-volume manufacturing (HVM) in IC fabs. The next CMC Conference will happen May 11-12 in Dallas, Texas.

At the end of the 2016 conference, a panel discussion moderated by Ed Korczynski was recorded and transcribed. The following is an edited excerpt of the conversation among industry experts including:

John Smythe, Distinguished Member of Technical Staff, Micron Technology.

KORCZYNSKI: Let’s start with specifications: over-specifying, and under-specifying. Do we have the right methodologies to be able to estimate the approximate ‘ball-park’ range that the impurities need to be in?

GIRARD: For determining the specifications, to some extent it doesn’t matter, because we are moving out of the world of specs; what matters is the control-limits. To Tim Hendry’s point in the keynote yesterday [EDITOR’S NOTE: Tim G. Hendry, vice president of the Technology and Manufacturing Group and director of Fab Materials at Intel Corporation, provided a conference keynote address on “Process Control Methods for Advanced Materials”], what was really interesting is that instead of the common belief that we should start by supplying the product with the lowest possible variability, we should try to explore the window in which the product is working. So get 10 containers from the same batch and introduce deliberate variability so that you know the process space in which you can play. That is the most important information for reaching the most reasonable and data-driven numbers to specify control limits. A lot of specs in the past were primarily determined by marketing decisions instead of data.

SUNDQVIST: Consider the first introduction of what were called “super-clean” ALD precursors for the original MIS DRAM capacitors: Samsung used about 10nm of hafnium-aluminate, and it would not matter if there was slight contamination in the precursors because you were not trying to control for a specific high-k phase. Whereas now you are doping very precisely and you have already scaled the film thickness, so over time the specification for high-k precursors has become more important.

SMYTHE: I think it comes down to the premise that when you are doing vapor transport through a bubbler, some would argue that it’s like a distillation column. So it’s a matter of thinking about what is transporting and what isn’t. In some cases the contaminant you’re concerned about is in the ampule but it never makes it to the process chamber, or the act of oxidizing destroys it as a volatile byproduct. So I think the bigger issue is change-management, not necessarily the exact specification. You must know what you have, and agree that a single adjustment to improve the productivity of chemical synthesis requires that ‘fingerprinting’ be done to show the same results. The argument is that you do not accept “less-than” as part of a specification, you only accept what it is.

AUDIENCE QUESTION: The systems in which these precursors are used also have ‘memory’ based on the prior reactions in the chamber and byproducts that get adsorbed on walls. When these byproducts come out in subsequent processing they can alter conditions so that you’re actually running in CVD-mode instead of ALD-mode. Chamber effects can wash out a lot of the value of having really pure chemicals, if the chemistry moving through a delivery system into a chamber picks up contaminants that you spent a whole lot of money taking out upstream of the point of delivery. What do you think about that?

GIRARD: Well, this is a ‘crisis!’ When something like this starts to happen in a fab, or even during the development cycles, you can’t prioritize resources and approaches; you just have to do everything. Sometimes it’s the tool, sometimes it’s the chemical, sometimes it’s the interaction of the two, sometimes it’s back-streaming from the vacuum sub-system…there are so many ways that things can go wrong. Certainly you have to clear up the chemistry part as early as possible.

SUNDQVIST: We work with zirconium precursors for ALD, and you can develop a precursor that gives you a very pure ALD process that really works like an ALD process should. However, you can still use the TEMA-Zr precursor, which in processing has a CVD component that you can use to gain throughput. So you can have a really good ALD precursor that gives low particle-counts, good process stability, and an ideal thermal processing range, but if the growth rate goes down by 20% then you’re not very popular in the fab. Many things change when you make an ‘improved’ molecule to perfect the process, and sometimes you want to use an imperfect part of the process.

FIGURE 2: John Smythe, Distinguished Member of Technical Staff of Micron Technology, explains approaches to controlling materials all the way to point-of-use. (Source: TECHCET CA)

SMYTHE: What we’re doing a lot more these days is doing chamber finger-printing, where we’re putting a quad-filtered mass-spec on each chamber—not a cheap little RGA, but real analytical-grade—and it’s been enlightening. If you look at your chemistry moving through a delivery line using something like the Schrødenger software, it’s not a big deal to see that you can use the mass spec to see some synthesis happening in the line. We joke and call it ‘point of use synthesis’ but it’s not very funny. We are used to having spare delivery lines built-in so we can install tools to try to gain insights to prevent what we’ve been talking about.

KORCZYNSKI: John, since Micron has fabs in Lehi and fabs in Singapore and other places, while they do run different product loads, do you have to worry about how long it takes things to travel on a slow boat to Singapore? Do you have to stockpile things more strategically these days, and does that affect your receiving department?

SMYTHE: What we really need are a few good ocean-going hydrofoil ships! The most complete answer is that we first identify which things need ‘batch-qual,’ so that if we do a batch-qual in Virginia and know that material is going to Taiwan, we have confidence it will pass batch-qual in Taiwan. There are certain materials for which we require information on which synthesis batch, which production batch, and sometimes which bottling batch. Sometimes you take a yield hit because you didn’t have the right vision, and then you institute batch-qual.

I think most of you are familiar with the concept of ‘ship-to-stock’: when you have enough good statistical history and a good change-management process with the supplier, then you can do ship-to-stock and that reduces the batch-qual overhead. On a case-by-case basis you have to figure out how difficult that is. A small story I can tell is that with Block Co-Polymer (BCP) self-assembly we found one particular element that in concentration above 5 ppm prevented the poly-styrene from self-assembling in the same way, whereas other metal trace contaminants could be a hundred times higher and have no effect on the process. So this gets back to some of our earlier discussion that it’s not enough to know that your trace elements are below some level. Tell me the exact atoms and the exact counts and then we’ll talk about using them. The BCP R&D taught us that in some situations just changing from one batch to the next could increase defects a thousand times. So we will see a bigger push to counting atoms.

[DISCLOSURE: Ed Korczynski is co-chair of the CMC Conference, and Marketing Director of TECHCET CA the advisory services firm that administers the Critical Materials Council (CMC).]

At a Thursday morning working breakfast during SEMICON West, in-the-know attendees heard executives representing the world’s leading memory fabs discuss manufacturing challenges at the 4th annual Entegris Yield Forum. Among the excellent presenters was Norm Armour, managing director of worldwide facilities and corporate EHSS at Micron. Armour has been responsible for some of the most famous fabs in the world, including the Malta, New York logic fab of GlobalFoundries, and AMD’s Fab25 in Austin, Texas. He discussed how facilities systems affect yield and parametric control in the fab.

Just recently, his organization within Micron broke records working with M&W on the new flagship Fab 10X in Singapore—now running 3D-NAND—by going from ground-breaking to first-tool-in in less than 12 months, followed by over 400 tools installed in 3 months. “The devil is in the details across the board, especially for 20nm and below,” declared Armour. “Fabs are delicate ecosystems. I’ll give a few examples from a high-volume fab of things that you would never expect to see, of component-level failures that caused major yield crashes.”

Ultra-Pure Water (UPW)

Ultra-Pure Water (UPW) is critical for IC fab processes including cleaning, etching, CMP, and immersion lithography, and contamination specs are now at the part-per-billion (ppb) or part-per-trillion (ppt) levels. Use of online monitoring is mandatory to mitigate risk of contamination. International Technology Roadmap for Semiconductors (ITRS) guidelines for UPW quality (minimum acceptable standard) include the following critical parameters:

* Resistivity @ 25°C: >18.0 MΩ-cm,

* TOC: <1.0 ppb,

* Particles/ml: <0.3 @ 0.05 µm, and

* Bacteria by culture: <1 per 1000 ml.
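
For illustration, those limits map naturally onto a simple on-line excursion check of the kind a UPW monitoring system might run; the parameter names and data structure below are hypothetical, and only the limits come from the list above.

```python
# UPW limits from the list above; each entry is (comparison, limit).
UPW_LIMITS = {
    "resistivity_Mohm_cm":   (">", 18.0),   # at 25 degrees C
    "toc_ppb":               ("<", 1.0),
    "particles_per_ml_50nm": ("<", 0.3),    # counted at 0.05 um
    "bacteria_per_1000ml":   ("<", 1.0),
}

def upw_excursions(sample: dict) -> list:
    """Return the parameters in a sample that violate the UPW limits."""
    bad = []
    for name, (comp, limit) in UPW_LIMITS.items():
        value = sample[name]
        ok = value > limit if comp == ">" else value < limit
        if not ok:
            bad.append(name)
    return bad

sample = {"resistivity_Mohm_cm": 18.2, "toc_ppb": 0.6,
          "particles_per_ml_50nm": 0.8, "bacteria_per_1000ml": 0.0}
print(upw_excursions(sample))   # ['particles_per_ml_50nm']
```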

In one case associated with a gate cleaning tool, elevated levels of zinc were detected in lots that had passed through one particular tool running a variation on a classic SC1 wet clean. High-purity chemistries were eliminated as sources based on analytical testing, so the root-cause analysis shifted to the UPW system as a possible source. Statistical analysis then showed a positive correlation between UPW supply lines equipped with pressure regulators and the zinc exposure. The pressure-regulator vendor confirmed the use of zinc-oxide and zinc-stearate in the assembly process of the pressure regulator. “It was really a curing agent for an elastomer diaphragm that caused the contamination of multiple lots,” confided Armour.

UPW pressure regulators are just one of many components used in facilities builds that can significantly degrade fab yield. It is critical to implement a rigorous component testing and qualification process prior to component installation and widespread use. “Don’t take anything for granted,” advised Armour. “Things like UPW regulators have a first-order impact upon yield and they need to be characterized carefully, especially during new fab construction and fit up.”

Photoresist filtration

Photoresist filtration has always been important for ensuring high yield in manufacturing, but it has become ultra-critical for lithography at the 20nm node and below. Dependable filtration is particularly important because the industry lacks in-line monitoring technology capable of detecting particles below ~40nm.

Micron tried using filters with 50nm pore diameters for a 20nm node process…and saw excessive yield losses along with extreme yield variability. “We characterized pressure-drop as a function of flow-rate, and looked at various filter performances for both 20nm and 40nm particles,” explained Armour. “We implemented a new filter, and lo and behold saw a step function increase in our yields. Defect densities dropped dramatically.” Tracking the yields over time showed that the variability was significantly reduced around the higher yield-entitlement level.

Airborne Molecular Contamination (AMC)

Airborne Molecular Contamination (AMC) is ‘public enemy number one’ in 20nm-node and below fabs around the world. “In one case there were forest fires in Sumatra and the smoke was going into the atmosphere and actually went into our air intakes in a high-volume fab in Taiwan thousands of miles away, and we saw a spike in hydrogen-sulfide,” confided Armour. “It increased our copper CMP defects, due to copper migration. After we installed higher-quality AMC filters for the make-up air units we saw dramatic improvement in copper defects. So what is most important is that you have real-time on-line monitoring of AMC levels.”

Building collaborative relationships with vendors is critical for troubleshooting component issues and improving component quality. “Partnering with suppliers like Entegris is absolutely essential,” continued Armour. “On AMCs for example, we have had a very close partnership that developed out of a team working together at our Inotera fab in Taiwan. There are thousands of important technologies that we need to leverage now to guarantee high yields in leading-node fabs.” The Figure shows just some of the AMCs that must be monitored in real-time.

Big Data

The only way to manage all of this complexity is with “Big Data,” and in addition to the primary process parameters that must be tracked there are many essential facilities inputs to the analytics.

“Conventional wisdom is that process tools create 90% of your defect density loss, but that’s changing toward facilities now,” said Armour. “So why not apply the same methodologies within facilities that we do in the fab?” Statistical Process Control (SPC) is after-the-fact and reactive, while Advanced Process Control (APC) provides real-time fault detection on input variables, including parameters such as the vibration or flow-rate of a pump.

“Never enough data,” enthused Armour. “In terms of monitoring input variables, we do this through the PLCs and basically use SCADA to do the fault-detection interdiction on the critical input variables. This has been proven to be highly effective, providing a lot of protection, and letting me sleep better at night.”
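
A toy version of that input-variable fault detection, using rolling statistical limits on a single facilities signal such as pump vibration, might look like the following sketch; the window size, sigma threshold, and example signal are invented for illustration and are not Micron’s actual PLC/SCADA logic.

```python
from collections import deque
import statistics

def fault_monitor(readings, window=50, n_sigma=4.0):
    """Yield (index, value) for readings outside rolling mean +/- n_sigma*stdev.

    A crude stand-in for fault detection on a facilities input variable
    (e.g. pump vibration or flow-rate)."""
    history = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(history) >= 10:                      # need some baseline first
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history) or 1e-9
            if abs(x - mu) > n_sigma * sigma:
                yield i, x
                continue                            # keep faults out of the baseline
        history.append(x)

# Example: a stable signal with one excursion at index 60.
signal = [1.00 + 0.01 * ((i * 37) % 7 - 3) for i in range(100)]
signal[60] = 1.40
print(list(fault_monitor(signal)))                  # [(60, 1.4)]
```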

Micron also uses these data to provide site-to-site comparisons. “We basically drive our laggard sites to meet our world-class sites in terms of reducing variation on facility input variables,” explained Armour. “We’re improving our forecasting as a result of this capability, and ultimately protecting our fab yields. Again, the last thing a fab manager wants to see is facilities causing yield loss and variation.”

Semiconductor integrated circuit (IC) manufacturing has always relied upon the supply of critical materials from a global supply chain. Now that shrinks of IC feature sizes have begun to reach economic limits, future functionality improvements in ICs are increasingly derived from the use of new materials. The Critical Materials Conference 2016—to be held May 5-6 in Hillsboro, Oregon (cmcfabs.org)—will explore best practices in the integration of novel materials into manufacturing. Dr. David Thompson, Senior Director, Center of Excellence in Chemistry, Applied Materials will present on “Agony in New Material Introductions – minimizing and correlating variabilities,” which he was willing to discuss in advance with SemiMD.

Korczynski: With more and more materials being considered for use in high-volume manufacturing (HVM) of advanced ICs, how do you begin to selectively screen out materials that will not work for one reason or another to be able to reach the best new material for a target application?

Thompson: While there’s ‘no one size fits all’ solution to this, it typically starts with a review of what’s available and known about the current offerings. With respect to the talk at the CMC, we’ll review the challenges we run into after the materials system and chemistries are set and have been proven generally viable, but still require significant optimization in order to get acceptable yields for manufacturing. It’s a very long road from device proof of concept on a new materials system to a viable manufacturing process.

Korczynski: Since new materials are being considered for use on the atomic-scale in advanced devices, doesn’t all of this have to be done with control at the atomic scale?

Thompson: For the material on the chip, many mainstream analytical techniques are used to achieve atomic-level control, including TEMs and AFMs with atomic resolution during film development for many applications. Unfortunately, this resolution is not available for the chemicals we’re relying on to deposit these materials. For a typical precursor that weighs in the 200 Dalton range, a gram of precursor may have 5 × 10^20 molecules. That’s a lot of molecules. Even with ppb (10^-9) resolutions on analytical, you’re still dealing with invisible populations of >10^10 molecules. It gets worse. While trace-metals analysis can hit ppb levels, molecular analysis techniques are typically limited to 0.1 to 0.01 percent resolutions for most semiconductor precursors, and there may be impurities which are invisible to routine analytical techniques.

Ultimately, we rely on analytical techniques to control the gross parameters and disciplined process controls to verify suppliers produce the same compositions the same way, and to manage impurities. On the process and hardware side, it’s like threading the needle trying to get the right film at the right throughput, in a process space that’s as tolerant as possible to the inevitable variability in these chemistries.

Korczynski: With all of this investment in developing one specialty material supplier for advanced IC manufacturing, what is the cost to develop and qualify a second source?

Thompson: Generally, it’s not sustainable to release a product with dual specialty-material sources. The problem with dual-sourcing is that chemical suppliers protect not just their IP but also their sub-supply-chains and proprietary methods of production, transport and delivery. However, given how trace elements in the formulation can change depending on the conditions the molecules experience over time, the customer in many cases needs to develop two separate sub-recipes based on the specific vendor’s chemistry they are using. So redundancy in the supply chain is prudent, as is making sure the vendor can produce the material in different locations.

There are countless examples over the last 20 years of what I like to call ‘the agony of the supply-chain,’ when a process got locked into using a material whose only supply was from a Ph.D. chemist making it in small batches in a lab. In most cases the initial batch of any new molecule is made at a scale that would fit in a coffee mug. Sometimes, though, scaling up to the first industrial-scale batch can alter impurity levels that change yields on the wafer, even with improved purification. So while a customer would like to keep using small-batch production, that is not sustainable, yet trying to qualify a second vendor in this environment presents significant challenges.

Korczynski: Can you share an example with us of how your team brought a source of subtle variation under control?

Thompson: We had a process using a new metal film, and in the early development everything looked great. Eventually we observed a drift of process results that was more pronounced with some ampoules and less so with others. The root cause initially eluded us. Then a bright Ph.D. on our team noted that it was interesting that the supplier did not report a particular contaminant that would tend to be present as a byproduct of the reaction. The supplier confirmed it was present and variable at concentrations of 100-300 ppm in the blend. This contaminant was relatively more volatile than the main component due to vapor-pressure differences, and much more reactive with the substrate/wafer. It was found that this variability in the chemistry induced the process variation on the wafer (as shown in Figure 1).

Chasing impurities and understanding their impact requires rigor and a lot of data collection. There’s no Star Trek analyzer we can use to give us knowledge of all impurities present and the role of those impurities on the process. Many impurities are invisible to routine analytical techniques, so we work very closely with vendors to establish a chemistry analytical protocol for each precursor that may consist of 5-10 different techniques. For the impurities we can’t detect we rely on excellent manufacturing process control and sub-supply sourcing management.

Korczynski: Is the supply-chain for advanced precursors for deposition and etch supplying everything we need in early R&D?

Thompson: New precursor ideation—the science that leads to new classes of compounds with new reactivity, which Roy Gordon or, more recently, Chuck Winter have been doing in academia—is critically important, and while there are a few academics doing excellent work in this space, in general there’s not enough focus on this topic. While we see many IP-protected molecules, too often they are obvious, simple modifications to one skilled in the art, consisting of merely adding a functional group off of a ring, or mixing and matching known ligand systems. We don’t see a lot of disruptive chemistries. The industry is hunting for differentiated reactivity, and evolutionary precursor-development approaches generally aren’t sufficiently disruptive. While this research is useful in terms of tuning a vapor pressure or thermal stability, it only very rarely produces a differentiated reactivity.

Korczynski: Do we need new methodologies to more efficiently manage all of this?

Thompson: Applied has made significant investments over the last 5 years to help accelerate the readiness of new materials across the board. One of the best things about working at Applied is the rate at which we can learn and build an ecosystem around a new material. With our strength in chemistry, deposition, CMP, etch, metrology and a host of other technologies, we get a fast, strong feedback loop going to accelerate issue discovery, resolution and general learning around new materials.

On the chemical supply-chain front, the need is to make sure that chemical vendors accelerate their analytical chemistry development on new materials. Correlating the variability of chemistry to process results, and ultimately to yield, is the real battle. The more knowledge we have of a chemistry moving into development, the faster learning can occur. I explain to my team that we can’t be proactive if we are responding to things we didn’t anticipate. Having to develop the analytical technique to see the impurity responsible for causing (or resolving) a variability after the fact means starting out at a significant disadvantage. However, we’ve seen a good response from suppliers on new materials and significant improvement on the early learnings necessary to minimize the agony of new material introductions.

In an exclusive interview with Solid State Technology during SPIE-AL this year, imec Advanced Patterning Department Director Greg McIntyre said, “The big encouraging thing at the conference is the progress on EUV.” The event included a plenary presentation by TSMC Nanopatterning Technology Infrastructure Division Director and SPIE Fellow Anthony Yen on “EUV Lithography: From the Very Beginning to the Eve of Manufacturing.” TSMC is currently learning about EUVL using 10nm- and 7nm-node device test structures, with plans to deploy it for high volume manufacturing (HVM) of contact holes at the 5nm node. Intel researchers confirm that they plan to use EUVL in HVM for the 7nm node.

Recent improvements in EUV source technology—80W source power shown by the end of 2014, 185W by the end of 2015, and 200W now shown by ASML—have been enabled by multiple laser pulses tuned to best produce plasma from tin droplets. TSMC reports that 518 wafers per day were processed by their ASML EUV stepper, and that the tool was available ~70% of the time. TSMC showed that a single EUVL exposure can create 46nm-pitch lines/spaces using a complex 2D mask, as is needed for patterning the metal-2 layer within multilevel on-chip interconnects.

To improve throughput in HVM, the resist sensitivity to the 13.5nm-wavelength radiation of EUV needs to be improved, while the line-width roughness (LWR) specification must be held to low single-digit nanometers. With a 250W source and 25 mJ/cm2 resist sensitivity, an EUV stepper should be able to process ~100 wafers-per-hour (wph), which should allow for affordable use when matched with other lithography technologies.
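
The throughput arithmetic behind that statement can be sketched with a simple proportionality, with wafers-per-hour scaling with source power and inversely with dose, anchored to the 250W / 25 mJ/cm2 / ~100 wph operating point quoted above. Real tools also lose time to wafer exchange and stage overhead, so these numbers are optimistic approximations.

```python
def euv_throughput_wph(source_w: float, dose_mj_cm2: float,
                       ref=(250.0, 25.0, 100.0)) -> float:
    """Scale throughput from a reference point, assuming exposure-limited
    operation so that wafers-per-hour ~ source power / dose.

    The reference point (250 W, 25 mJ/cm^2 -> ~100 wph) is the one quoted in
    the text; overhead such as wafer exchange is ignored."""
    ref_w, ref_dose, ref_wph = ref
    return ref_wph * (source_w / ref_w) * (ref_dose / dose_mj_cm2)

print(round(euv_throughput_wph(200.0, 25.0)))   # ~80 wph with today's 200 W source
print(round(euv_throughput_wph(250.0, 58.0)))   # ~43 wph if the resist needs 58 mJ/cm^2
```

At today’s 200W sources the same scaling gives roughly 80 wph, and a resist needing 58 mJ/cm2 would cut throughput by more than half, which is the cost argument made earlier.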

Researchers from Inpria—the company working on metal-oxide-based EUVL resists—looked at the absorption efficiencies of different resists, and found that the absorption of the metal oxide based resists was ≈ 4 to 5 times higher than that of the Chemically-Amplified Resist (CAR). The Figure shows that higher absorption allows for the use of proportionally thinner resist, which mitigates the issue of line collapse. Resist as thin as 18nm has been patterned over a 70nm thin Spin-On Carbon (SOC) layer without the need for another Bottom Anti-Reflective Coating (BARC). Inpria today can supply 26 mJ/cm2 resist that creates 4.6nm LWR over 140nm Depth of Focus (DoF).
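
The link between absorption and usable resist thickness can be illustrated with a quick Beer-Lambert estimate; the attenuation lengths below are assumed round numbers chosen only for illustration, not published values for any specific resist.

```python
import math

def absorbed_fraction(thickness_nm: float, attenuation_length_nm: float) -> float:
    """Fraction of incident EUV absorbed in a film of the given thickness."""
    return 1.0 - math.exp(-thickness_nm / attenuation_length_nm)

# Assumed attenuation lengths (illustration only): CAR ~250 nm,
# metal-oxide resist ~60 nm (roughly 4x stronger absorption).
car = absorbed_fraction(55.0, 250.0)     # a typical CAR thickness
mor = absorbed_fraction(18.0, 60.0)      # the 18 nm MOR film cited above
print(f"CAR 55 nm film: {car:.0%} of incident EUV absorbed")   # ~20%
print(f"MOR 18 nm film: {mor:.0%} of incident EUV absorbed")   # ~26%
```

Even at a third of the thickness, the more strongly absorbing film captures a comparable or larger share of the incident photons, which is why thinner metal-oxide resists remain viable.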

To prevent pattern collapse, the thickness of resist is reduced proportionally to the minimum half-pitch (HP) of lines/spaces. (Source: JSR Micro)

JEIDEC researchers presented their summary of the trade-off between sensitivity and LWR for metal-oxide-based EUV resists: ultra high sensitivity of 7 mJ/cm2 to pattern 17nm lines with 5.6nm LWR, or low sensitivity of 33 mJ/cm2 to pattern 23nm lines with 3.8nm LWR.

In a keynote presentation, Seong-Sue Kim of Samsung Electronics stated that, “Resist pattern defectivity remains the biggest issue. Metal-oxide resist development needs to be expedited.” The challenge is that defectivity at the nanometer-scale derives from “stochastics,” which means random processes that are not fully predictable.

Stochastics of Nanopatterning

Anna Lio, from Intel’s Portland Technology Development group, stated that the challenges of controlling resist stochastics, “could be the deal breaker.” Intel ran a 7-month test of vias made using EUVL, and found that via critical dimensions (CD), edge-placement-error (EPE), and chain resistances all showed good results compared to 193i. However, there are inherent control issues due to the random nature of phenomena involved in resist patterning: incident “photons”, absorption, freed electrons, acid generation, acid quenching, protection groups, development processes, etc.

Stochastics for novel chemistries can only be controlled by understanding in detail the sources of variability. From first-principles, EUV resist reactions are not photon-chemistry, but are really radiation-chemistry with many different radiation paths and electrons which can be generated. If every via in an advanced logic IC must work then the failure rate must be on the order of 1 part-per-trillion (ppt), and stochastic variability from non-homogeneous chemistries must be eliminated.
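
That 1 ppt figure follows from simple Poisson yield arithmetic; the sketch below (with an illustrative via-limited yield target) shows how quickly yield collapses once the per-via failure probability rises above that level.

```python
import math

N_VIAS = 1e10                     # vias per chip, from the text

def chip_yield(p_fail_per_via: float, n_vias: float = N_VIAS) -> float:
    """Poisson estimate of the chance that every via on a chip works."""
    return math.exp(-p_fail_per_via * n_vias)

for p in (1e-9, 1e-11, 1e-12):    # per-via failure probabilities
    print(f"p = {p:.0e}: via-limited yield ~ {chip_yield(p):.1%}")
# p = 1e-09: ~0.0%   p = 1e-11: ~90.5%   p = 1e-12 (1 ppt): ~99.0%
```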

Consider that for a CAR designed for 15mJ/cm2 sensitivity, there will be just:

* 145 photons/nm2 for 193nm (ArF), and

* 10 photons/nm2 for EUV.
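
Those photon densities follow directly from the dose and the photon energy at each wavelength; a quick check of the arithmetic:

```python
H, C = 6.626e-34, 3.0e8                      # Planck constant (J*s), speed of light (m/s)

def photons_per_nm2(dose_mj_cm2: float, wavelength_nm: float) -> float:
    dose_j_per_nm2 = dose_mj_cm2 * 1e-3 / 1e14       # 1 cm^2 = 1e14 nm^2
    photon_energy_j = H * C / (wavelength_nm * 1e-9)
    return dose_j_per_nm2 / photon_energy_j

print(f"{photons_per_nm2(15.0, 193.0):.0f}")   # ~146 photons/nm^2 for 193nm ArF
print(f"{photons_per_nm2(15.0, 13.5):.0f}")    # ~10 photons/nm^2 for 13.5nm EUV
```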

To improve sensitivity and suppress failures from photon shot-noise, we need to increase resist absorption, and also re-consider chemical amplification mechanisms. “The requirements will be the same for any resist and any chemistry,” reminded Lio. “We need to evaluate all resists at the same exposure levels and at the same rules, and look at different features to show stochastics like in the tails of distributions. Resolution is important but stochastics will rule our world at the dimensions we’re dealing with.”

Industrial Technology Research Institute (ITRI) (https://www.itri.org.tw/) worked with TSMC (http://www.tsmc.com) in Taiwan on a clever in-line monitor technology that transforms liquids and automatically-diluted slurries into aerosols for subsequent airborne measurements. They call this “SuperSizer” technology, and claim that tests have shown resolution over the astounding range of 5nm to 1 micron, with the ability to accurately represent size distributions over that range. Any dissolved gas bubbles in the liquid are lost in the aerosol process, which allows the tool to unambiguously count solid impurities. The Figure shows the compact components within the tool that produce the aerosol.

Semiconductor fabrication (fab) lines require in-line measurement and control of particles in critical liquids and slurries. With the exception of those carefully added to chemical-mechanical planarization (CMP) slurries, most particles in fabs are accidental yield-killers that must be kept to an absolute minimum to ensure proper yield in IC fabs, and ever decreasing IC device feature sizes result in ever smaller particles that can kill a chip. Standard in-line tools to monitor particles rely on laser scattering through the liquid, and such technology allows for resolution of particle sizes as small as 40nm. Since we cannot control what we cannot measure, the IC fab industry needs this new ability to measure particles as small as 5nm for next-generation manufacturing.

There are two actual measurement technologies used downstream of the SuperSizer aerosol module: a differential mobility analyzer (DMA), and a condensation particle counter (CPC). The aerosol first moves through the DMA column, where particle sizes are measured based on the force balance between air flow speed in the axial direction and an electric field in the radial direction. The subsequent CPC then provides particle concentration data.

Combining both data streams properly allows for automated output of information on particle sizes down to 5nm, size distributions, and impurity concentrations in liquids. Since the tool is intended for monitoring semiconductor high-volume manufacturing (HVM), the measurement data is automatically categorized, analyzed, and reported according to the needs of the fab’s automated yield management system. Users can edit the measurement sequences or recipes to monitor different chemicals or slurries under different conditions and schedules.
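
Conceptually, the merged output is a binned size distribution with a concentration per bin; the sketch below shows the idea with invented bin edges and counts, since the tool’s actual data format is not described here.

```python
# Hypothetical DMA size bins (nm) and CPC-derived counts for one scan of an
# aerosolized liquid sample; values are invented for illustration.
bins_nm   = [(5, 10), (10, 20), (20, 40), (40, 100), (100, 1000)]
counts_ml = [120.0, 45.0, 12.0, 3.0, 0.5]      # particles per ml of liquid

total = sum(counts_ml)
print(f"total particles: {total:.1f} /ml")
for (lo, hi), c in zip(bins_nm, counts_ml):
    share = c / total
    print(f"{lo:4d}-{hi:<4d} nm: {c:6.1f} /ml  ({share:.0%})")
```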

When used to control a CMP process, the SuperSizer can be configured to measure not just impurities but also the essential slurry particles themselves. During dilution and homogeneous mixing of the slurry prior to aerosolization, mechanical agitation needs to be avoided to prevent particle agglomeration, which causes scratch defects. This new tool uses pressurized gas as the driving force for solution transport and mixing, so that any measured agglomeration in the slurry can be assigned to a source somewhere else in the fab.

TSMC has been using this tool since 2014 to measure particles in solutions including slurries, chemicals, and ultra-pure water. ITRI, which owns the technology and related patents, can now take orders to manufacture the product, but the research organization plans to license the technology to a company in Taiwan for volume manufacturing. EETimes reports (http://www.eetimes.com/document.asp?doc_id=1328283) that the current list price for a tool capable of monitoring ultra-pure water is ~US$450k, while a fully-configured tool for CMP monitoring would cost over US$700k.

Applied Materials today unveiled the Applied Olympia ALD system, which uses thermal sequential-ALD technology for the high-volume manufacturing (HVM) of leading-edge 3D memory and logic chips. Strictly speaking this is a mini-batch tool, since four 300mm wafers are loaded onto a turn-table in the chamber that continuously rotates through four gas-isolated modular processing zones. Each zone can be configured to flow any arbitrary ALD precursor or to expose the surface to Rapid-Thermal-Processing (RTP) illumination, so an extraordinary combination of ALD processes can be run in the tool. “What are the applications that will result from this? We don’t know yet because the world has never before had a tool which could provide these capabilities,” said David Chu, Strategic Marketing, Applied’s Dielectric Systems and Modules group.

Fig.1: The four zones within the Olympia sequential-ALD chamber can be configured to use any combination of precursors or treatments. (Source: Applied Materials)

Figure 1 shows that in addition to a simple high-throughput ALD process in which wafers rotate through A-B-A-B precursors in sequence, or zones configured in an A-B-C-B sequence to produce a nano-laminate such as Zirconia-Alumina-Zirconia (ZAZ), almost any combination of pre- and post-treatments can be used. The gas-panel and chemical-source sub-systems in the tool allow for the use of up to 4 precursors. Consequently, Olympia opens the way to depositing the widest spectrum of next-generation atomic-scale conformal films, including advanced patterning films, higher- and lower-k dielectrics, low-temperature films, and nano-laminates.

“The Olympia system overcomes fundamental limitations chipmakers are experiencing with conventional ALD technologies, such as reduced chemistry control of single-wafer solutions and long cycle times of furnaces,” said Dr. Mukund Srinivasan, vice president and general manager of Applied’s Dielectric Systems and Modules group. “Because of this, we’re seeing strong market response, with Olympia systems installed at multiple customers to support their move to 10nm and beyond.” Future device structures will need more and more conformal ALD, as new materials will have to coat new 3D features.

When engineering ever-smaller structures using ALD, thermal budgets inherently decrease to prevent atomic inter-diffusion. Compared to thermal ALD, Plasma-Enhanced ALD (PEALD) functions at reduced temperatures but tends to induce impurities in the film because of excess energy in the chamber. The ability of Olympia to do RTP on each sequentially deposited atomic layer leads to final film properties that are inherently superior in defectivity to PEALD films at the same thermal budget: alumina, silica, silicon-nitride, titania, and titanium-nitride depositions into high-aspect-ratio structures have been shown.

Purging pump-purge (from the tool)

Fab engineers who have to deal with ALD technology—from process to facilities—should be very happy working with Olympia because the precursors flow through the chamber continuously instead of having to use the pump-purge sequences typical of single-wafer and mini-batch ALD tools used for IC fabrication. Pump-purge sequences in ALD tools result in the following wastes:

* Wasted chemistry since tools generally shunt precursor-A past the chamber directly to the pump-line when precursor-B is flowing and vice-versa,

* More wasted chemistry because the entire chamber gets coated along with the wafer,

* Wasted device yield because precursors flowing in the same space at different times can accidentally overlap and create defects.

“Today there are chemistries that are more or less compatible with tools,” reminded Chu. “When you try to use less-compatible chemistries, the purge times in single-wafer tools really begin to reduce the productivity of the process. There are chemistries out there today that would be desirable to use that are not pursued due to the limitations of pump-purge chambers.”

