After more than two years of development, Design-Driven Metrology (DDM) is now being introduced into production flows for semiconductor manufacturing, with initial applications targeted at 65 nm and below but backward-compatible to the 90 nm node and above. This paper presents the fundamental components of the DDM framework and the characteristic architectural relationships among these elements. The discussion includes the current status and future prospects of this new metrology paradigm, which represents the true enabler for Design For Manufacturability (DFM) flows and applications. At the core of Design-Driven Metrology lies the simple but powerful concept of using physical design layouts, and more specifically (X,Y) coordinates and polygonal shapes, to automate the generation of metrology jobs. Derived from a decade of Optical Proximity Correction practice, the adoption of CAD tools for visualizing and manipulating design layouts in everyday lithography work has provided the essential infrastructure for metrology automation. The in-depth discussion of data flow and system architecture is followed by a presentation of key DDM applications, with specific emphasis on CDSEM metrology, ranging from process development and yield optimization to circuit design. The study concludes with an analysis of the extendibility of DDM and derived flows to other metrology areas in semiconductor manufacturing.

Optical Proximity Correction (OPC) models are calibrated with Scanning Electron Microscope (SEM) data, where the measurement uncertainty varies among pattern types (i.e., line versus space, 1D versus 2D, and small versus large). The impact of SEM measurement uncertainty on OPC model integrity is mitigated through a weighting scheme. Statistical methods, such as relating the weight to the standard deviation of the SEM measurements, require more measurements per calibration structure than is economically feasible. Similarly, the use of experience and engineering judgment requires many iterations before a reasonable weighting scale is determined. In this paper we present the results of OPC model fitness statistics associated with metrology-based weights (MtBW) versus model-based weights (MBW). The motivation for the latter approach is the promise of an unbiased, consistent, and efficient estimate of the model parameters.
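The metrology-based weighting idea above is commonly realized as inverse-variance weighted least squares; the following is a minimal sketch under that assumption — the two-parameter linear model and the gauge data are hypothetical, not from the paper:

```python
import numpy as np

def weighted_least_squares(X, y, sigma):
    """Fit parameters beta minimizing sum_i w_i * (y_i - X_i @ beta)^2,
    with metrology-based weights w_i = 1 / sigma_i^2 (inverse variance),
    so noisier SEM gauges influence the fit less.
    X: (n, p) design matrix, y: (n,) measured CDs, sigma: (n,) SEM std devs."""
    w = 1.0 / sigma ** 2
    # Weighted normal equations: (X^T W X) beta = X^T W y
    XtW = X.T * w  # scales each column of X.T (each gauge) by its weight
    return np.linalg.solve(XtW @ X, XtW @ y)

# Hypothetical data: five gauges roughly following y = 10 + 2x, varying noise
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = np.array([10.0, 12.1, 13.9, 16.2, 18.0])
sigma = np.array([0.5, 0.5, 1.0, 1.0, 2.0])  # larger sigma -> lower weight
beta = weighted_least_squares(X, y, sigma)
```

Model-based weights (MBW) would replace `sigma` with uncertainty estimates derived from the model itself rather than from repeated SEM measurements per structure.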

As described by the ITRS roadmap [1], the introduction of next-generation processes in semiconductor fabrication continually requires tighter control in order to ensure optimal device characteristics. Recent process development has shown an increasing number of charged layers, which in turn affects the inline critical dimension scanning electron microscope's (CD-SEM) ability to generate quality measurements, thereby impacting process control. This paper reports on the investigation of techniques to measure and compensate for this charge dynamically to yield quality measurements. New capabilities of the CD-SEM were evaluated and tested at various process steps, including processing steps not measured by the CD-SEM. This capability not only renders the CD-SEMs essentially immune to charged-layer effects, but can also be used to provide feedback to other tool-sets suspected of causing the charge build-up. These charge measurements help provide an understanding of how device performance might be impacted. In order to establish charged-wafer monitoring in the future, along with feedback loops, studies of the reproducibility and persistency of the charge across sequential processes in the back-end layers have been made. Studies were also conducted to determine the origin of the charge by observing its distribution before and after known problematic process steps.

How to effectively control the critical dimension (CD) is always a hot topic in photolithography. At the 65nm node using phase-shift mask (PSM) techniques, no factor related to CD variation should be ignored without full investigation, given the ever-decreasing CD budget. In this paper, we focus on the local CD variation (LCDV) at the gate level within an area of 200μm x 200μm printed on a 193nm exposure tool. In contrast with AWLV (across-wafer line variation) and ACLV (across-chip line variation), the more localized LCDV depends more strongly on the following three major factors: i) local wafer flatness, mainly dominated by STI (shallow trench isolation) steps after CMP (chemical mechanical polishing); ii) effectiveness of OPC (optical proximity correction) in covering all transistors with different geometrical shapes in the circuit layout; and iii) line edge roughness (LER) and line width roughness (LWR) related to the photo and etch processes. Although OPC errors, LER, and LWR are also very important, the current discussion is limited to characterizing the relationship between LCDV and STI step-height (S-H), owing to length limitations. The STI S-H between the active surface and the trench oxide surface always exists due to the different material selectivities in the CMP process. The major gate CD influences from STI S-H are strongly correlated to the different geometrical shapes of transistors in circuits, such as single/multi-finger, wide/narrow, interior/exterior-flare, etc. According to our experiments and simulations for both alt-PSM (alternating PSM) and att-PSM (attenuating PSM) processes, the following important conclusions can be derived. a) The gate CDs in the two PSM processes show different sensitivities to STI S-H for different geometrical shapes of transistors in the circuit layout. The alt-PSM process is more sensitive than the att-PSM process, especially for isolated gates.
This is a shortcoming of the alt-PSM process in effectively controlling the LCDV. b) STI S-H usually makes the CD larger in both PSM processes, especially for isolated gates in the alt-PSM process. From our observations, it is generally true that the narrower the transistor width, the larger the gate CD will be. However, CD variation trends in the att-PSM process are not as explicit as those observed with alt-PSM. c) One should be very careful when trying to improve CD uniformity by reducing the STI step-height with a blanket etch-back, because OPC errors are tightly coupled with STI step-heights. d) Improving STI S-H uniformity is always welcome, because it will improve the AWLV. e) The narrow isolated gate is the best CD feature for monitoring the interaction of AWLV with STI S-H uniformity.

Comprehensive CD characterization of the low-k trench etch for the 65nm node is performed through a specially designed mask with global pattern density (GPD) ranging from 25% to 60%. Unlike traditional means, through this mask we systematically demonstrate global pattern density effects on etch behavior in correlation with CD uniformity, CD proximity, and CD linearity, free of the local etch loading effect contributed by the nearby environment [1-3] and the position-dependent effect contributed by resist developing or aberrations of the wafer-imaging lens [4]. From our study, CD proximity is the most sensitive item: wider trenches show larger CD variation than narrow trenches when the global environment varies. Moreover, we find that low-pressure etch conditions in a small-chamber-volume etcher exhibit less CD variation from the global pattern density effect. On the other hand, pressure in a large-chamber-volume etcher provides better tuning capability for adjusting CD variation. The results suggest that residence time might be an influential factor for GPD-dependent CD control.

Without the ability to detect potential yield-limiting defects in-line, the yield learning cycle is severely crippled, compromising the financial success of chip makers. As design rules shrink, device yield is seriously affected by smaller particles and patterned defects that were not important in the past. These mechanisms are becoming more difficult to detect with current defect detection tools and techniques. The optical defect inspection tools that are currently available do not adequately detect defects, while scanning electron microscope (SEM) based inspection tools are too slow. With each successive technology node, optical inspection becomes less capable relative to the previous technology. As sensitivity is increased to detect smaller defects, the nuisance defect rate increases commensurately. Line-edge roughness (LER) and subtle process variations are making it more difficult to detect defects of interest (DOI). Smaller defects mean smaller samples available for energy dispersive x-ray analysis (EDX), necessitating an improved or new methodology for elemental analysis. This paper reviews these and other challenges facing defect metrology at the 45nm technology node and beyond. The challenges in the areas of patterned and unpatterned wafer inspection, defect review, and defect characterization are outlined along with proposed solutions. The paper also provides an overview of several ongoing projects conducted at the International SEMATECH Manufacturing Initiative (ISMI) to address these challenges.

In order to stay competitive in the rapidly advancing international semiconductor industry, a manufacturing company needs to continually focus on several areas, including rapid yield learning, manufacturing cost, statistical process control limits, process yield, equipment availability, cycle time, turns per direct labor hour, customer on-time delivery, and zero customer defects. To hold a competitive position in the semiconductor market, performance on these measurable factors must be maintained regardless of the technology generation. In this presentation, the methodology applied by Freescale Semiconductor to achieve the fastest yield learning curve in the industry, as cited by Dr. Robert Leachman of UC Berkeley in 2003, will be discussed.

Electrical failure due to incomplete contacts or vias has arisen as one of the primary modes of yield loss for 130 nm and below designs in manufacturing. Such failures are generally understood to arise from both random and systematic sources. The addition of redundant vias, where possible, has long been an accepted DFM practice for mitigating the impact of random defects. Incomplete vias are often characterized by having a diameter near the target dimension but a depth of less than 100% of target. As such, this is a difficult problem to diagnose and debug in-line, since bright- and dark-field optical inspection systems cannot typically distinguish between a closed, partially open, and fully open contact. Advanced metrology systems have emerged in recent years to meet this challenge, but no perfect manufacturing solution has yet been identified for full-field verification of all contacts. Voltage Contrast (VC) SEM metrology biases the wafer to directly measure electrical conductivity after fill / polish, and can therefore easily discern a lack of electrical connection to the underlying conductor caused by incomplete photo, etch, or fill processing. While an entire wafer can in principle be VC scanned, throughput limitations dictate very sparse sampling in manufacturing. SEM profile grading (PG) leverages the rich content of the secondary electron waveform to decipher information about the bottom of the contact. Several authors have demonstrated an excellent response of the Profile Grade to intentional defocus vectors. However, the SEM can only target discrete or single-digit groupings of contacts, and therefore requires intelligent guidance to identify those contacts which are most prone to failure, enabling protection of the fab WIP. A priori knowledge of which specific contacts in a layout are most likely to fail would prove very useful for proactive inspection in manufacturing.
Model-based pre-manufacturing verification allows such knowledge to be communicated to manufacturing. This paper will focus on 130 nm node contact patterning, and will correlate SEM Profile Grade output to the extensive suite of model-based image tags from the Calibre™ OPC verification engine. With an understanding of which image parameters are most highly correlated with the occurrence of incomplete contact formation for a given process, the process model can be used to automatically direct inspection metrology to those layout instances that pose the highest risk of patterning failure through the lithographic process window. Such an approach maximizes the value content of in-line metrology.

The missing via has been a very troublesome defect in semiconductor manufacturing, especially for foundries. Its solution can be rather attractive for yield improvement in relatively mature technologies, since each percentage point of improvement means a significant profit-margin enhancement. However, the root cause of missing-via defects is not easy to find, since many factors, such as defocus, material re-deposition, and inadequate developing, can lead to them. Therefore, knowing the exact cause of each defect type is the key to solving the problem. In this paper, we present the analysis methodology used in our company. In the experiments, we have observed three types of missing vias. The first type consists of large areas, usually circular, of missing patterns, primarily located near the wafer edge. The second type consists of isolated sites with single partially opened or completely unopened vias. The third type consists of relatively small circular areas within which the entire via pattern is missing. We first tried optimizing the developing recipe and found that the first type of missing via can be largely removed by tuning the rinse process, which improves the cleaning efficiency for developing residue. However, this method does not remove missing vias of the second or third type. For the second type, we have found that it is related to local defocus caused by topographical variation. To resolve the third type of missing-via defect, we performed extensive experiments with different types of developer nozzles and different types of photomasks, and found no distinct dependence of the defect density on either the nozzle or the mask type. In addition, we studied the defect density for three resists with different resolution capabilities and found a correlation between defect density and resist resolution: in general, a lower-resolution resist also has a lower defect density. The results will be presented in the paper.

Design Based Metrology (DBM) implements a novel automation flow, which allows for a direct
and traceable correspondence to be established between selected locations in product designs and
matching metrology locations on silicon wafers. Thus DBM constitutes the fundamental enabler of
Design For Manufacturability (DFM), because of its intrinsic ability to characterize and quantify the
discrepancy between design layout intent and actual patterns on silicon. The evolution of the CDSEM
into a DFM tool, capable of measuring thousands of unique sites, includes three essential
functionalities: (1) seamless integration with the design layout and its coordinate system; (2) new
design-based pattern recognition; and (3) fully automated recipe generation. Additionally, advanced
SEM metrology algorithms are required for complex 2-dimensional features, Line-Edge-Roughness
(LER), etc. In this paper, we consider the overall DBM flow, its integration with traditional CDSEM
metrology and the state-of-the-art in recipe automation success. We also investigate advanced
DFM applications, specifically enabled by DBM, particularly for OPC model calibration and
verification, design-driven RET development and parametric Design Rule evaluation and selection.

There are numerous metrology challenges facing photolithography for the 45 nm technology node and beyond in the
areas of critical dimension (CD), overlay and defect metrology. Many of these challenges are identified in the 2005
International Technology Roadmap for Semiconductors (ITRS) [1]. The Lithography and Metrology sections of the
ITRS call for measurement of 45/32/22/18 nm generation linewidth and overlay. Each subsequent technology generation
requires less variation in CD linewidth and overlay control, which results in a continuing need for improved metrology
precision. In addition, there is an increasing need to understand individual edge variation and edge placement errors
relative to the intended design. This is accelerating the need for new methods of CD and overlay measurement, as well
as new target structures. This paper will provide a comprehensive overview of the CD and overlay metrology challenges
for photolithography, taking into account the areas addressed in the 2005 ITRS for the 45 nm technology generation and
beyond.

A potential limitation to a wider usage of the scatterometry technique for CD evaluation comes from its requirement of
dedicated regular measurement gratings, located in wafer scribe lanes. In fact, the simplification of the original chip
layout that is often requested to design these gratings may impact their printed dimension and shape. Etched gratings
might also suffer from micro-loading effects different from those in the circuit. For all these reasons, measurements collected
therein may not represent the real behavior of the device. On the other hand, memory devices come with large sectors
that usually possess the characteristics required for a proper scatterometry evaluation. In particular, for a leading edge
flash process this approach is in principle feasible for the most critical process steps. The impact of potential drawbacks,
mainly a lack of pattern regularity within the tool probe area, is investigated. Moreover, a very large sampling plan on features
with equal nominal CD and density spread over the same exposure shot becomes feasible, thus yielding deeper insight
into the overall lithographic process window and a quantitative method to evaluate process equipment performance over
time by comparison to acceptance data and/or last preventive maintenance. All the results gathered in the device main
array are compared to those collected in standard scatterometry targets, tailored to the characteristics of the considered
layers in terms of designed CD, pitch, stack and orientation.

In this paper, we present a study comparing the robustness of several process feedback controllers, including controllers based on EWMA and Kalman-filter estimation. In addition, a new multi-dimensional feedback controller is introduced, which has significantly improved robust stability and reduced sensitivity to unknown noise. In the robustness study, we assume model mismatch and unknown disturbances. Two robustness issues are addressed in this paper: the region of model mismatch in which a process feedback controller is stable, and the H-infinity gain of the controlled process from unknown noise to system performance. Simulations are shown to compare the performance of the controllers under model mismatch, system drift, and random noise.
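As an illustration of the EWMA-type run-to-run feedback studied above, the following toy simulation shows a controller remaining stable under deliberate model mismatch; the gains, drift rate, and noise level are invented for the example and do not come from the paper:

```python
import numpy as np

def simulate_ewma_control(num_runs=50, lam=0.3, target=100.0,
                          true_gain=1.0, model_gain=1.2,
                          drift=0.1, noise_sd=0.5, seed=0):
    """Run-to-run EWMA feedback: after each run, update the intercept
    estimate from the observed model residual and solve the (possibly
    mismatched) process model for the next recipe setting.
    model_gain != true_gain injects a deliberate model mismatch."""
    rng = np.random.default_rng(seed)
    a_hat = 0.0                                    # EWMA intercept estimate
    outputs = []
    for k in range(num_runs):
        u = (target - a_hat) / model_gain          # recipe from the model
        y = true_gain * u + drift * k + rng.normal(0.0, noise_sd)
        outputs.append(y)
        # EWMA update on the disturbance seen through the (wrong) model
        a_hat = lam * (y - model_gain * u) + (1 - lam) * a_hat
    return np.array(outputs)

y = simulate_ewma_control()
late_error = np.abs(y[-10:] - 100.0).mean()  # residual error once settled
```

Because the effective loop gain is lam times the gain ratio (0.3 × 1.0/1.2 = 0.25, inside the stability region), the loop settles near target despite the 20% gain mismatch; a Kalman-filter controller would replace the fixed lam with a time-varying gain.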

For several years, integrated scatterometry has held the promise of wafer-level process control. While integrated scatterometry on lithography systems is being used in manufacturing, production implementation on etch systems is just beginning to occur. Because gate patterning is so important to yield, gate linewidth control is viewed by many as the most critical application for integrated scatterometry on etch systems. IBM has implemented integrated scatterometry on its polysilicon gate etch systems to control gate linewidth for its 90 nm node SOI-based microprocessors in its 300 mm manufacturing facility. This paper shows the performance of the scatterometry system and the equipment-based APC system used to control the etch process. Some of the APC methodology is described, as well as sampling strategies, throughput considerations, and scatterometry models. Results reveal that the scatterometry measurements correlate well to CD-SEM measurements before and after etch, and also correlate to electrical measurements. Finally, the improvement in linewidth distribution following the implementation of feedforward and feedback control in full manufacturing is shown.

Downscaling of microchip production technology continually tightens the requirements on process-control precision and demands improved critical dimension (CD) measurement and control tools. In this paper we discuss the application of an in situ critical dimension measurement method to the improvement of the photomask development process. For this purpose, scatterometry and fitting methods are applied to a CD end-point detection system (CD EPD). The CD EPD system differs from the commonly used EPD system, which mainly detects the thickness of the remaining resist. Measurement can be performed directly during the development process, giving a measurement-time advantage over ex situ methods. The in situ method allows development to be controlled precisely and makes it possible to meet process-control requirements. For the application of scatterometry to CD measurement, diffraction analysis is carried out using rigorous coupled wave analysis (RCWA). We calculate a library of reflected spectra for various CDs and pattern heights. These spectra are fitted to an experimentally measured spectrum to obtain the CD and height. To increase the precision and speed of the measurements, spectral interpolation and various fitting methods are used.
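The library-and-fitting step described above can be sketched as nearest-spectrum matching; the RCWA solver is replaced here by a hypothetical smooth toy model, so the spectra and grid values are illustrative only:

```python
import numpy as np

def build_library(cds, heights, model):
    """Precompute reflected spectra over a (CD, height) grid.
    `model(cd, h)` stands in for the RCWA solver, which is not shown here."""
    grid = [(cd, h) for cd in cds for h in heights]
    spectra = np.array([model(cd, h) for cd, h in grid])
    return grid, spectra

def fit_spectrum(measured, grid, spectra):
    """Return the (CD, height) whose library spectrum minimizes the
    sum-of-squares distance to the measured spectrum."""
    errors = np.sum((spectra - measured) ** 2, axis=1)
    return grid[int(np.argmin(errors))]

wl = np.linspace(400, 800, 81)  # wavelengths in nm

def toy_model(cd, h):
    # Stand-in for RCWA: spectrum varies smoothly with CD and height
    return 0.3 + 0.001 * cd * np.sin(wl / h)

grid, spectra = build_library(np.arange(80, 121, 5),
                              np.arange(90, 131, 10), toy_model)
measured = toy_model(100, 110) + np.random.default_rng(1).normal(0, 1e-4, wl.size)
best_cd, best_h = fit_spectrum(measured, grid, spectra)
```

In practice, interpolating between library spectra (as the abstract mentions) refines the estimate beyond the grid spacing.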

Scatterometry has gained popularity in recent years, proving itself a worthy contender among existing metrology systems. Scatterometry provides fast, accurate, and precise profile information, which is valuable for in-line process control in a production environment. Scatterometry applications widely adopted in IC fabs include poly gate ADI and AEI and shallow-trench-isolation depth measurements. Recently, mobility enhancement by compressive strain at the source/drain has been reported, which greatly improves PMOS Idsat. In this work, we extend the application domain of scatterometry technology to the two-dimensional recessed Si profile used in strained source and drain (SSD) structures. The complexity of measuring SSD structures by scatterometry requires the use of many parameters in modeling, which hinders a stable library setup. Our approach to circumventing this issue is to identify the most sensitive parameters first and then further reduce the number of variables through an effective medium approximation (EMA). This paper discusses the preparation, experiments, and results of the scatterometry measurements. The extracted data have been compared with transmission electron microscopy results, and good correlation in depth and profile is observed. In addition, we have performed repeatability tests and fault-detection checks, and the trend chart indicates that our methodology is very robust for in-line process monitoring.
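The abstract does not specify which EMA variant is used; as a hedged illustration, the simplest (linear, volume-weighted) effective medium approximation replaces a mixed layer by a single slab, with hypothetical indices and fill fraction:

```python
import numpy as np

def ema_effective_index(n1, n2, f1):
    """Linear (volume-weighted) effective medium approximation: a mixed
    region is replaced by one material whose permittivity is the
    fill-fraction-weighted average of the two constituents' permittivities.
    n1, n2: refractive indices; f1: volume fraction of material 1."""
    eps = f1 * n1 ** 2 + (1.0 - f1) * n2 ** 2
    return np.sqrt(eps)

# Hypothetical example: a Si / air mixed layer with 40 % Si fill,
# collapsed to a single effective slab to cut the model's variable count
n_eff = ema_effective_index(3.88, 1.0, 0.4)
```

Collapsing a mixed sub-layer this way trades some physical fidelity for a smaller parameter space, which is exactly what stabilizes the library setup the abstract describes.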

The resolution of an optical microscope is limited by the optical wavelengths used. However, there is no fundamental limit to the sensitivity of a microscope to small differences in any of a feature's dimensions. That is, those limits are determined by such things as the sensitivity of the detector array, the quality of the optical system, and the stability of the light source. The potential for using this nearly unbounded sensitivity has sparked interest in extending optical microscopy to the characterization of sub-wavelength structures created by photolithography and using that characterization for process control. In this paper, an analysis of the imaging of a semiconductor grating structure with an optical microscope will be presented. The analysis includes the effects of partial coherence in the illumination system, aberrations of both the illumination and the collection optics, non-uniformities in the illumination, and polarization. It can thus model just about any illumination configuration imaginable, including Koehler illumination, focused (confocal) illumination, or dark-field illumination. By propagating Jones matrices throughout the system, polarization control at the back focal planes of both illumination and collection can be investigated. Given a detailed characterization of the microscope (including aberrations), images can be calculated and compared to real data, allowing details of the grating structure to be determined, in a manner similar to that found in scatterometry.
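The Jones-matrix propagation mentioned above can be illustrated with a minimal two-element cascade; the elements and angles are generic textbook examples, not the paper's actual microscope configuration:

```python
import numpy as np

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with fast axis at theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])   # rotation into the plate's frame
    J = np.array([[1, 0], [0, 1j]])   # retarder in its own frame
    return R @ J @ R.T

# Cascade elements by matrix multiplication (last element leftmost)
E_in = np.array([1.0, 0.0])           # horizontally polarized field
E_out = polarizer(np.pi / 2) @ quarter_wave_plate(np.pi / 4) @ E_in
intensity = np.abs(E_out) @ np.abs(E_out)
```

The quarter-wave plate at 45° converts the horizontal input to circular polarization, so the vertical polarizer passes half the intensity; propagating such matrices through the back focal planes is what lets polarization control be modeled at both illumination and collection.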

We have implemented back focal plane (conoscopic) imaging in an optical microscope that has also been modified to allow selection of the illumination angles and polarization at the sample, and collected back focal plane images of silicon on silicon grating scatterometry targets with varying line widths. Using a slit illumination mask, the zero-order diffraction versus angle for −60° to +60° incident angles at a given polarization was obtained from a single image. By using reference images taken on a flat silicon background, we correct the raw target images for illumination source inhomogeneities and polarization-dependent transmission of the optics, and convert them to reflectance versus angle data for s- and p-polarizations, similar to that obtained from angle-resolved grating scatterometry. As with conventional scatterometry, the target lines need not be resolved for the reflectance signature to show sensitivity to small changes in the grating parameters. For a series of 300 nm pitch targets with line widths from 150 nm to 157 nm, we demonstrate nanometer-level sensitivity to line width with good repeatability, using 546 nm illumination. Additionally, we demonstrate a technique for separating the zero order from higher order diffraction on targets with multiple diffraction orders, allowing collection of both zero and higher order diffraction versus angle from the back focal plane image. As conventional images can be easily collected on the same microscope, the method provides a powerful tool for combining imaging metrology with scatterometry for optical critical dimension measurements in semiconductors.
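The reference-image correction described above reduces to a per-pixel ratio; the following is a minimal sketch with synthetic data, where the illumination profile and reflectance curves are invented for illustration:

```python
import numpy as np

def to_reflectance(raw_target, raw_reference, r_si):
    """Convert a raw back-focal-plane image to reflectance versus angle.
    Dividing by a reference image of bare silicon cancels illumination
    inhomogeneity and polarization-dependent optics transmission;
    multiplying by the known silicon reflectance restores absolute scale."""
    return (raw_target / raw_reference) * r_si

# Hypothetical 1-D slice over incidence angle: the source profile varies
# across the pupil, but the same variation appears in the reference image.
angles = np.linspace(-60, 60, 121)
illum = 1.0 + 0.2 * np.cos(np.radians(angles))    # unknown source profile
true_reflectance = 0.35 + 0.001 * np.abs(angles)  # target's true signature
r_si = 0.37                                       # known Si reflectance
raw_ref = illum * r_si
raw_tgt = illum * true_reflectance
recovered = to_reflectance(raw_tgt, raw_ref, r_si)
```

Because the illumination profile multiplies both images identically, it cancels exactly in the ratio, which is why a flat-silicon reference suffices to recover reflectance versus angle.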

In this paper we present a unique method of evaluating the angular illumination homogeneity in an optical microscope
using the through-focus focus metric. A plot of the sum of the mean square slope throughout an optical image as the
target moves through the focus is defined as the through-focus focus metric. Using optical simulations we show that the
angular illumination inhomogeneity causes the through-focus focus metric value to proportionately increase at specific
focus positions. Based on this observation, we present an experimental method to measure angular illumination
homogeneity by evaluating the through-focus focus metric values on a grid across the field of view. Using the same
through-focus focus metric, we also present a detailed, simulation-aided study of measuring critical dimensions with
nanometer sensitivity.
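A minimal sketch of the through-focus focus metric defined above, taking the summed squared intensity slope of each image as the target steps through focus; the synthetic blur series below is purely illustrative:

```python
import numpy as np

def focus_metric(image):
    """Focus metric of one image: the squared magnitude of the intensity
    gradient (mean-square slope) summed over all pixels."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def through_focus_metric(image_stack):
    """Evaluate the focus metric for each image as the target moves
    through focus; the resulting curve is the through-focus focus metric."""
    return np.array([focus_metric(img) for img in image_stack])

# Toy stack: a vertical edge blurred more at larger defocus
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
stack = [1.0 / (1.0 + np.exp(-X / w)) for w in (0.02, 0.05, 0.1)]
tfm = through_focus_metric(stack)  # sharpest image gives the largest value
```

In the paper's method, this scalar is tracked on a grid across the field of view; a through-focus curve that peaks at different values or positions across the grid reveals angular illumination inhomogeneity.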

In our work we discuss two approaches to offline CD-SEM recipe creation, for both OPC qualification wafers and the introduction of new products to the manufacturing line, using the Applied Materials OPC Check and Offline Recipe Editor (ORE) applications. We evaluate the stability of the offline-created recipes against process variations for different OPC test layouts as well as for production measurements on multiple lots per week, and compare the results to the performance of recipes created directly on the tool. Further, the success rate of recipe creation is evaluated. All offline recipes were generated in advance of wafer availability using GDS data. The offline-created recipes have shown pattern recognition success rates of up to 98% and measurement success rates of up to 99% for line/space as well as contact-hole (CH) measurements, without manual assists during measurement. These success rates are on par with the rates typically reached by an experienced CD-SEM engineer creating recipes directly on the tool.

Interference microscopes are a widely used tool in many parts of production processes where exact information about
surfaces is needed. Users appreciate the high accuracy and resolution but often ignore the possible errors which cannot
be neglected in high precision metrology. Besides the measurement result, the uncertainty is the most important
information necessary to evaluate the quality of a measurement.
At the moment, standardized calibration strategies for interference microscopes are missing. In order to obtain
comparable results, standardized information about the uncertainty is needed. Thus, models for the determination of the
uncertainty of interference microscopes have to be developed. Therefore a simulation environment is being created,
which is able to simulate all processes occurring inside interference microscopes. In particular, influences of real
environments like laboratories or production processes are important. Furthermore user induced influences are
considered.
With the tool, based on a ray tracing procedure, systematic variations of the disturbing influences are possible and the
consequences on the interferogram and the measurement result can be simulated and analyzed. Hence it is possible to
manage the error influences of these complex systems as well as the generation of an uncertainty budget. The
findings help to set up process-oriented calibration strategies for interference microscopes to improve the comparability
and traceability of measurements.

We are developing a transmission X-ray scattering platform capable of measuring the average cross section and line edge roughness in patterns ranging from 10 nm to 500 nm in width with sub-nm precision. Critical Dimension Small Angle X-ray Scattering (CD-SAXS) measures the diffraction of a collimated X-ray beam with sub-Angstrom wavelength from a repeating pattern, such as those in light scatterometry targets, to determine the pattern periodicity, line width, line height, and sidewall angle. Here, we present results from CD-SAXS with an emphasis on line edge roughness characterization. Line edge roughness measurements from CD-SAXS are compared with top-down scanning electron microscopy values and comparative definitions are discussed.

The National Institute of Standards and Technology (NIST) and SEMATECH are working to address traceability issues in semiconductor dimensional metrology. In semiconductor manufacturing, many of the measurements made in the fab are not traceable to the SI definition of the meter. This is because a greater emphasis is often placed on precision and tool matching than accuracy. Furthermore, the fast pace of development in the industry makes it difficult to introduce suitable traceable standard artifacts in a timely manner. To address this issue, NIST and SEMATECH implemented a critical dimension atomic force microscope (CD-AFM)-based reference measurement system (RMS). The system is calibrated for height, pitch, and width and has traceability to the SI definition of length in all three axes. Because the RMS is expected to function at a higher level of performance than inline tools, the level of characterization and handling of uncertainty sources is on a level usually seen for instruments at national measurement institutes. We have implemented a performance monitoring system to help us check the long-term stability of the calibrations. In this paper, we discuss progress in improving the uncertainty of the instrument and the details of our performance monitoring. We also present a method for accounting for some of the uncertainty due to the higher order tip effects.

The National Institute of Standards and Technology (NIST) has a multifaceted program in atomic force microscope
(AFM) dimensional metrology. There are two major instruments being used for traceable AFM measurements at NIST.
The first is a custom in-house metrology AFM, called the calibrated AFM (C-AFM), and the second instrument is a
commercial critical dimension AFM (CD-AFM). The C-AFM has displacement metrology for all three axes traceable
to the 633 nm wavelength of the Iodine-stabilized He-Ne laser. In the current generation of this system, the relative
standard uncertainty of pitch and step height measurements is approximately 1.0 × 10⁻³ for pitches at the micrometer
scale and step heights at the 100 nm scale, as supported by several international comparisons. We expect to surpass this
performance level soon. Since the CD-AFM has the capability of measuring vertical sidewalls, it complements the
C-AFM. Although it does not have intrinsic traceability, it can be calibrated using standards measured on other
instruments, such as the C-AFM, and we have developed uncertainty budgets for pitch, height, and linewidth
measurements using this instrument. We use the CD-AFM primarily for linewidth measurements of near-vertical
structures. At present, the relative standard uncertainties are approximately 0.2% for pitch measurements and 0.4% for
step height measurements. As a result of the NIST single crystal critical dimension reference material (SCCDRM)
project, it is possible to calibrate CD-AFM tip width with a 1 nm standard uncertainty. We are now using the CD-AFM
to support the next generation of the SCCDRM project. In prototypes, we have observed features with widths as low as
20 nm and having uniformity at the 1 nm level.

The need for absolute accuracy is increasing as semiconductor manufacturing technologies advance to sub-65nm
nodes, since device sizes are shrinking to sub-50nm while offsets ranging from 5nm to 20nm are often encountered. While
TEM is well recognized as the most accurate CD metrology, direct comparison between TEM data and in-line CD
data can sometimes be misleading due to differences in statistical sampling and interference from sidewall roughness. In
this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. As a scanning
profiling technique, CD-AFM avoids e-beam damage and minimizes damage-induced CD changes,
and it permits more statistical sampling than typical cross-section metrologies.
While AFM is well regarded for the accuracy of its depth measurements, little data has been reported on the
accuracy of CD-AFM for CD measurement. Our main focus here is to demonstrate the accuracy of CD-AFM and show its
measuring capability for semiconductor related materials and patterns. In addition to the typical precision check, we
spent an intensive effort on examining the bias performance of this CD metrology, which is defined as the difference
between CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness
(LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low k material.
Based on the LER characteristics of each patterning, a method is proposed to reduce its influence on CD measurement.
Application of our method to a VLSI nanoCD standard is then demonstrated, and a bias of less than 1nm is
achieved between the CD-AFM data and the standard's value. With careful sample preparation and TEM tool
calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to
400nm. CD measurements of poly ADI and low k trenches are also reported, and both show good correlation to in-line
CD-SEM results.

We fabricate three kinds of carbon nanotube (CNT) probes to be employed in critical dimension atomic force microscopy (CD-AFM). Despite unique advantages in size and hardness, the use of nanotube tips has been limited by the lack of reproducible control over CNT orientation and shape. We propose that the CNT alignment issue can be addressed with an ion beam bending process, in which a CNT free-standing on the apex of an AFM tip aligns itself parallel to the FIB direction so that its free end points toward the ion source, with no external electric or magnetic field involved. This process allowed us to realize cylindrical probes with CNT-scale diameters, and subsequently two additional types of CNT tips. One is a ball-ended CNT tip, which has side protrusions of tungsten/amorphous carbon at the end of the CNT in the horizontal dithering direction. The other is a 'bent' CNT tip, in which the end of the CNT is bent to one side. Using the former type, both sidewalls of a trench or line can be measured except for the bottom corners; the corners can be reached with the latter type, but only one sidewall can be measured per tip setting. The three types of tips appear to satisfy the requirements in both size and accessibility to re-entrant sidewalls, and are awaiting actual tests in CD-AFM.

Downscaling of semiconductor fabrication technology requires continuous improvements in production process control. To ensure tool-to-tool matching and compatibility of critical dimension-scanning electron microscopy (CD-SEM) measurements with measurements from other technologies, such as optical CD, or from other fabrication entities, accuracy has become a much more important factor than in the past. CD-SEM measurements have always exhibited a bias, which can be quite significant but is typically neglected since it does not vary much over a process window. However, the standard CD-SEM metrology approach to algorithm accuracy (which can be formulated as "Accuracy = Precision + Calibration") does not work for small features; i.e., the measurement bias is not constant for small features. Limitations of the standard measurement algorithm, based on the treatment of the singular point of the waveform for CDs smaller than 30 nm, were considered, along with a new model library-based approach. The implementation of reliable measurement algorithms for features at the 45 nm node and beyond requires the development of more sophisticated approaches to SEM signal treatment. A three-dimensional (3-D) physical model that takes into account physical processes related to the beam interaction with the material is considered. The reliability of the new approach is verified using Monte-Carlo SEM simulation and real SEM images compared against reference measurements; total measurement uncertainty (TMU) improves with the better models. The relation of the developed method to the standard SEM measurement algorithm and the model-based approach is also considered.

It is important to be able to quantify the imaging performance of CD-SEMs for such purposes as verifying specifications, rechecking after routine maintenance, or tool matching. To perform such tests it is necessary to have both appropriate software for image analysis and suitable test samples. A package of 2-D Fourier transform and analysis software, designed as a plug-in for the freely available ImageJ program, has been developed and is available online. The requirement for a reproducible and well-characterized sample has been met by using direct-write electron beam lithography to fabricate suitable Fresnel zone plate structures.

Managing a fleet of metrology tools is becoming an extremely daunting task. This is especially so in manufacturing lines where it is not unusual to have many tools in the fleet and a very large mix of products and technologies. It is this large mix of products and technologies that pushes the number of recipes created into the thousands. Combine the large number of recipes with a poorly calibrated, monitored, and managed fleet of tools, and productivity can be negatively impacted in many ways. In this paper, these productivity detractors are explained in more detail to help understand the numerous ways a fleet of metrology tools can negatively impact the productivity of manufacturing and development lines. In pursuit of reducing metrology-tool-induced productivity detractors, the concept of metrology tool fleet management is presented. Categories of fleet management are also introduced, along with a comprehensive discussion of requirements. It is hoped that this discussion of requirements and solutions concerning the metrology tool fleet management concept will launch efforts to coordinate the comprehensive solutions needed between suppliers and tool owners. Some recommendations are made regarding long-term solutions to needs with respect to integrating fleet management into manufacturing fab systems.

As Critical Dimensions (CD) for semiconductor devices shrink to a few tens of nanometers, Line Edge Roughness (LER) and Line Width Roughness (LWR) become critical issues because they can degrade resolution and linewidth accuracy [1] and cause fluctuations in transistor performance [2-8]. LER is currently calculated from top-view SEM images [9-10]. However, these values do not take into account the sidewall variation along the height of the feature (the feature's geometry), so roughness information may be lost. In addition, several issues impact roughness measurement accuracy, for example blooming effects, resist slimming, and the choice of algorithm. Alternatively, the latest generation of CD-AFM [11,12] has been developed to measure patterns in three dimensions with a dynamic repeatability of around 1nm (3σ). By carefully tuning AFM parameters and choosing suitable AFM tips, the CD, LER, and LWR of both isolated and dense lines are measured as a function of the position on the feature. This metrology technique can be used on a large range of materials: photoresist, silicon oxide, and poly-silicon, without any pattern damage. Hence, it enables full characterization of the evolution of sidewall roughness after each technological step of a typical device fabrication.
In this paper we compare CD-SEM and CD-AFM techniques as a means of measuring LER and LWR on real resist structures and hard-mask structures (SiO2) that show significant variations due to different chemical compositions or process conditions. To better understand the limitations of each technique, we have generated and mixed various roughness amplitudes with various feature shapes (different top rounding, sidewall angles, etc.). Depending on the technique and the feature shape, the roughness measurement trends differ, which can lead to incorrect process tuning and ultimately degrade device performance.
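As a point of reference for such comparisons, LER is conventionally reported as three standard deviations of the edge-position residuals about a best-fit straight line. A minimal sketch of that convention (one of several definitions in use; the linear detrending choice here is an assumption):

```python
import numpy as np

def ler_3sigma(edge_positions_nm):
    """LER as 3x the standard deviation of edge positions about a
    best-fit straight line, removing tilt and placement of the edge."""
    y = np.asarray(edge_positions_nm, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)   # remove line tilt/offset
    residuals = y - (slope * x + intercept)
    return 3.0 * residuals.std(ddof=1)
```

LWR is computed the same way from the linewidth (right edge minus left edge) rather than a single edge position.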

Although various approaches can be used to quantify linewidth roughness (LWR), it is essential to determine it with
sufficient confidence. Statistical fluctuations inherent to the measurement process make correlating LWR with device
performance challenging. To reduce uncertainty, line width variations and LWR need to be monitored online
in full automation by CDSEM. In this paper, we use this methodology to investigate the effect of LWR on
electrical performance for various device applications. Our results quantify the impact of LWR by using matching
techniques.

Line edge and line width roughness (LER/LWR) is commonly estimated by standard deviation sigma. Since the
standard deviation is a function of sample line length L, the behavior of sigma(L) curve is characterized by the
correlation length and roughness exponent. In this paper, an efficient and practical macro LER/LWR analysis is
implemented by characterizing an arbitrary array of similar features within a single CD-SEM image. A large amount of
statistical data is saved from a single scan image. As a result, it reports full LER/LWR information including correlation
length, roughness exponent, sigma at infinite line length, and power spectrum. Off-line, in-house software is developed
for automated investigation, and it is successfully evaluated against various patterns. Starting with the detailed
description of the algorithm, experimental results are discussed.
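The quantities named above can be illustrated with a short sketch: sigma is estimated over segments of increasing sample length L, and the power spectrum comes from an FFT of the width profile. Function and parameter names are ours, and production implementations add steps such as noise-floor subtraction and windowing:

```python
import numpy as np

def lwr_statistics(widths, pixel_nm):
    """Estimate sigma(L) and the power spectrum of a sampled width profile.

    `widths`: 1-D array of linewidths (nm) sampled along the line at
    spacing `pixel_nm`. Illustrative sketch only.
    """
    n = len(widths)
    # sigma(L): mean standard deviation over segments of increasing length L
    lengths, sigmas = [], []
    for seg in (8, 16, 32, 64, 128):
        if seg > n:
            break
        m = n // seg
        chunks = widths[: m * seg].reshape(m, seg)
        lengths.append(seg * pixel_nm)
        sigmas.append(chunks.std(axis=1, ddof=1).mean())
    # Power spectral density via FFT of the mean-subtracted profile
    w = widths - widths.mean()
    psd = np.abs(np.fft.rfft(w)) ** 2 * pixel_nm / n
    freqs = np.fft.rfftfreq(n, d=pixel_nm)
    return np.array(lengths), np.array(sigmas), freqs, psd
```

For correlated roughness, sigma(L) rises with L and saturates once L exceeds the correlation length; the saturation value is the sigma at infinite line length, and the roughness exponent governs the shape of the rise.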

The lithographic challenges of printing at low-k1 for 65 nm logic technologies have been well documented (1,2). Heavy utilization of model-based optical proximity correction (OPC) and reticle enhancement technologies (RET) is standard practice for 65 nm logic nodes and below. Within SRAM cells, which are often more dimensionally constrained than random logic, characterization of the nominal gate linewidth and linewidth variation is critical to ensure cell performance and stability. In this paper, we present the use of the linewidth roughness analysis package of a commercially available CD-SEM to extract low-spatial-frequency information in order to characterize the effects of OPC, substrate topography, process variations, and RETs. The SEM-based characterization of across-device linewidth variation is analyzed statistically to extract the information necessary to set device processing conditions and to make layout corrections consistent with producing the least possible channel length variation along the active device.

An overview of the challenges encountered in imaging device-sized features using optical techniques recently developed in
our laboratories is presented in this paper. We have developed a set of techniques we refer to as scatterfield microscopy
which allows us to engineer the illumination in combination with appropriately designed metrology targets. The techniques
have previously been applied to samples with sub-50 nm sized features having pitches larger than the conventional Rayleigh
resolution criterion which results in images having edge contrast and elements of conventional imaging. In this paper we
extend these methods to targets composed of features much denser than the conventional Rayleigh resolution criterion. For
these applications, a new approach is presented which uses a combination of zero order optical response and edge-based
imaging. The approach is, however, more general and a series of analyses based on theoretical methods is presented. This
analysis gives a direct measure of the ultimate size and density of features which can be measured with these techniques and
addresses what measurement resolution can be obtained. We present several experimental results, optical simulations using
different electromagnetic scattering packages, and statistical analyses to evaluate the ultimate sensitivity and extensibility of
these techniques.

A novel approach to overlay metrology, called Blossom, maximizes the number of layers measurable within a single optical field of view (FOV). As chip processing proceeds, each layer contributes a set of at least four marks, arranged symmetrically on concentric circles, to create a 90° rotationally invariant array of marks that "blossoms" to fill the FOV. Radial symmetry about the target center is maintained at each layer to minimize susceptibility to metrology lens aberrations. Overlay combinations among detectable marks within the target can be measured simultaneously. In the described embodiment, 28 distinct layers are represented within a 50μm square FOV. Thus, all the layers of a functional chip can be represented in a single target. Blossom achieves several benefits relative to overlay methods currently in practice:
* Compression (>30X) of the area required for overlay targets.
* Nullification of within-target proximity effects.
* Suppression of optical mark fidelity (OMF) errors.
* Reduction of sensitivity to across-target detection noise.
* Elimination of overlay error random walk among layers.
* Reference mark redundancy for detection flexibility and robustness.
* Integration of multi-layer and within-layer overlay control schema.
* Simplification of overlay recipe creation and management.
* Capture and visualization of overlay performance through the entire chip fabrication process.
Blossom results from 65-nm products in manufacturing are described.

The National Institute of Standards and Technology (NIST) and The International Sematech Manufacturing Initiative
(ISMI) have been involved in a project to evaluate the accuracy of optical overlay measurements in the presence of
measurement target asymmetries created by typical wafer processing. The ultimate goal of this project is to produce a
method of calibrating optical overlay measurements on typical logic and memory production stacks. A method of
performing accurate CD-SEM and CD-AFM overlay measurements is first presented. These measurements are then
compared to optical overlay measurements of the same structures to assess the accuracy of the optical measurements.
Novel image rotation tests were also performed on these structures to develop a method to decouple errors from
metrology target asymmetries and measurement system optical asymmetries.

Overlay tool matching and accuracy issues are quickly approaching the complexity of those in critical
dimension metrology. While both issues warrant serious investigation, this paper deals with the matching
issues associated with overlay tools. Overlay tools need to run and measure as if they are a single tool -
they need to act as one. In this paper a matching methodology is used to assess a set of overlay tools in a
range of overlay applications. The methodology proposed in a prior SPIE paper2 is applied here to a
fleet of two generations of overlay tools to detect measurement problems not seen with conventional
Statistical Process Control techniques. Four studies were used to examine the benefits of this matching
methodology for this fleet of overlay tools. The first study was a matching assessment study. The second
study was a hardware comparison between generations of tools. The third study was a measurement
strategy comparison. The final study was a long term matching exercise where one example of a traditional
long term monitoring strategy was compared to a new long term monitoring strategy. It is shown that this
new tool matching method can be effectively applied to overlay metrology.

In this publication, the contributors to in-field overlay metrology uncertainty have been parsed and quantified on a back
end process and compared with results from a previous front end study1. Particular focus is placed on the unmodeled
systematics, i.e. the components which contribute to residuals in a linear model after removal of random errors. These
are the contributors which are often the most challenging to quantify and are suspected to be significant in the model
residuals. The results show that in both back and front end processes, the unmodeled systematics are the dominant
residual contributor, accounting for 60 to 70% of the variance, even when subsequent exposures are on the same
scanner. A higher order overlay model analysis demonstrates that this element of the residuals can be further dissected
into correctible and non-correctible high order systematics. A preliminary sampling analysis demonstrates a major
opportunity to improve the accuracy of lot dispositioning parameters by transitioning to denser sample plans compared
with standard practices. Field stability is defined as a metric to quantify the field to field variability of the intrafield
correctibles.
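The decomposition described above, removing a linear model and examining what remains in the residuals, can be sketched as follows. We assume a simple per-axis basis of translation plus terms linear in x and y (which absorbs magnification and rotation); the models used in the actual study are more elaborate and include higher-order terms:

```python
import numpy as np

def overlay_linear_fit(x, y, dx, dy):
    """Fit a 6-parameter linear overlay model (translation plus terms
    linear in x and y, per axis) by least squares; return the residuals
    and the fraction of overlay variance they retain."""
    A = np.column_stack([np.ones_like(x), x, y])       # [1, x, y] basis
    cx, *_ = np.linalg.lstsq(A, dx, rcond=None)
    cy, *_ = np.linalg.lstsq(A, dy, rcond=None)
    rx, ry = dx - A @ cx, dy - A @ cy
    frac = (np.var(rx) + np.var(ry)) / (np.var(dx) + np.var(dy))
    return rx, ry, frac
```

When the residual variance fraction stays large, as reported above, the remaining signal is a candidate for higher-order correctables rather than random noise.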

The feasibility of measuring overlay using small targets has been demonstrated in an earlier paper1. If the target is small ("smallness" being relative to the resolution of the imaging tool) then only the symmetry of its image changes with overlay offset. For our purposes the targets must be less than 5μm across, but ideally much smaller, so that they can be positioned within the active areas of real devices. These targets allow overlay variation to be tested in ways that are not possible using larger conventional target designs. In this paper we describe continued development of this technology.
In our previous experimental work the targets were limited to relatively large sizes (3x3μm) by the available process tools. In this paper we report experimental results from smaller targets (down to 1x1μm) fabricated using an e-beam writer.
We compare experimental results for the change of image asymmetry of these targets with overlay offset and with modeled simulations. The image of the targets depends on film properties and their design should be optimized to provide the maximum variation of image symmetry with overlay offset. Implementation of this technology on product wafers will be simplified by using an image model to optimize the target design for specific process layers. Our results show the necessary good agreement between experimental data and the model.
The determination of asymmetry from the images of targets as small as 1μm allows the measurement of overlay with total measurement uncertainty as low as 2nm.
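The symmetry-based principle can be illustrated with a toy metric: compare a measured intensity profile with its mirror image about the target center. The specific metric and the Gaussian test profiles below are our illustration, not the algorithm used in the paper:

```python
import numpy as np

def image_asymmetry(profile):
    """Normalized difference between an intensity profile and its mirror
    about the center: zero for a symmetric target, growing with overlay
    offset for a small (under-resolved) target."""
    p = np.asarray(profile, dtype=float)
    mirrored = p[::-1]
    return np.sum(np.abs(p - mirrored)) / np.sum(p + mirrored)

x = np.linspace(-5, 5, 501)
centered = np.exp(-x**2)           # symmetric image of a centered target
shifted = np.exp(-(x - 0.3)**2)    # target printed with a small offset
```

Calibrating such an asymmetry signal against known offsets, via a model or programmed-offset targets, converts it into an overlay measurement.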

Self-interferometric electrical test patterns are proposed for highly sensitive systematic projection printing field mapping and production wafer monitoring. The strategy is to adapt the high sensitivity of Pattern-And-Interferometric-Probe monitors for aberrations to electrical testing by means of short-loop and within-process-flow process step sequences. For this application, the measurement of the presence or absence of a contact-sized hole in the resist in a focus-exposure matrix would be replaced by the creation of an electrical open or short in a nominally conducting minimum-sized feature. Both double-exposure and single-exposure test patterns are presented. Detailed image simulations have been used to demonstrate the principles, create layout designs, characterize performance, and compare the enhanced sensitivity relative to typical circuit layout features. Sensitivities of 8% of the clear field per 0.01λ RMS have been verified through simulation of the electrical test pattern. Layouts of these patterns have been placed on multiple-student PSM test reticles for future experimental validation.

We have developed a new measurement technique employing digital probing with an AFM (Atomic Force
Microscope) that can examine the sidewalls of fine patterns. This technique employs digital probing operations, such
as sample-tilt step-in operation, tilt-step-in operation with a sharpened tilted tip, and multi-angle step-in operation with a
flared tip. First, we examined the validity of digital probing operation using a carbon nanotube (CNT) tip, showing the
measurement repeatability of approximately 1 nm (3σ) on a fine trench pattern with 50 nm width and 300 nm depth.
After calculating the slip between the tilted tip and the sidewall for the new sidewall measurement technique, we
measured a perpendicular reference sidewall with two types of operations, namely, tilt-step-in and multi-angle step-in
operations. We then obtained 3D images of ArF resist patterns that involved measurement of sidewall surface
roughness. Finally, we demonstrated a possibility of extending the technique for measuring denser trench patterns by
using sample-tilt method and a tilted CNT tip.

Full-wafer dual beam FIB-SEM systems have received considerable industrial interest in recent years and are now operational in several 200mm and 300mm fabs. These tools offer a 3D physical characterization capability for defects and device structures and as such allow more rapid yield learning and increased process control. Moreover, if SEM resolution is insufficient to reveal the defect origin or the necessary process details, it is now also possible to prepare TEM samples using a controlled, easy-to-learn in-situ process and to efficiently continue the characterization with a high-resolution TEM inspection. Thanks to the latest hardware developments and the high degree of automation of this TEM sample preparation process, wafers no longer need to be broken and remain essentially free from contamination. Hence, the TEM lamella process can be considered non-destructive, and wafers may continue the fabrication process flow.
In this paper we examine the SEM and TEM application capabilities offered by in-line dual beam systems. To qualify the wafer return strategy, the particle contamination generated by the system hardware as well as the process-induced contamination have been investigated. The particle levels measured are fully acceptable for adopting the wafer return strategy. Ga contamination does exist but is sufficiently low and localized that the wafer return strategy can be applied safely in the back-end of line process. Yield analysis has confirmed that there is no measurable impact on device yield. Although yet to be proven for front-end of line processes, the wafer return strategy has already been demonstrated as a valuable one in back-end of line processes. The newly developed non-destructive 3-D SEM-TEM characterization capability offers value-added data that allow the root cause of critical process defects to be determined in almost real time, for both standard (SEM) and more advanced (TEM) technologies.

The ability to measure profiles of high-aspect structures is important for the development of new integrated circuit fabrication processes. Delays in the development learning cycle frequently occur due to turn-around time associated with the logistics of off-line laboratory sectioning and analysis. Sample preparation techniques associated with existing cross-sectional imaging methodologies also necessitate destruction of the whole sample. Focused ion beam (FIB) sectioning has recently been used in conjunction with SEM imaging for profile acquisition inside the fabrication facility. However, full acceptance of FIB inside the cleanroom processing area has been slowed by concerns over the threat of Gallium contamination arising from the ion beam. There also exists uncertainty in the fidelity of FIB-based profile acquisition, due to the various artifacts associated with the ion beam mill sectioning process. In this article, the application of and difficulties associated with electron beam induced processing (etch and deposition) for obtaining feature profile shape information on masks and wafers will be described. Purely chemical reactions with much higher material selectivity and less damage have been employed to obtain microstructure profile information using various scanned electron beam tools. The superiority of electron beam induced deposition (compared to FIB) for passivation and replication of the surface topography prior to etching has also been demonstrated. In addition to electron and ion beam based sectioning, a novel atomic force microscope based nano-machining process has been developed for three-dimensional tomographic imaging of high-aspect features on masks and wafers. Images and profiles of feature regions not accessible with FIB/SEM or CDAFM methodologies will be presented. The challenges encountered for practical implementation of this new, non-beam-based, approach to sectioning will also be discussed. 
Advantages of this approach are: immunity to maximum aspect ratio limitations, superior lateral spatial sampling in X and Y, and no reliance on high-aspect probes for imaging. Therefore, tip-shape issues associated with currently incumbent CDAFM methodologies can be avoided altogether.

As we move toward the 45 and 32nm nodes, MuGFETs (Multi-Gate Field-Effect Transistors) are increasingly
considered a necessary alternative to keep pace with Moore's Law. If proven manufacturable, MuGFETs could
eventually replace conventional CMOS transistors within a few years. The ability to perform proper and extensive
metrology in a production environment is therefore essential. We investigate here some of the requirements of MuGFET
metrology. Accuracy and line width roughness (LWR) metrology will play an essential role because of the small
dimensions of the features involved. 3D metrology is required when dealing with non-planar devices. Sophisticated
checking of optical proximity correction (OPC) is needed to ensure that the design is respected. We propose here
some possible solutions to address the needs of MuGFET metrology in a production-worthy fashion. A procedure to
calibrate CDSEM to TEM for accuracy is developed. We performed LWR metrology of fins in a fully automated way
by using CDSEM, while the 3D information is obtained by means of scatterometry. Finally, we will discuss the
application of design-based metrology (DBM) to MuGFET OPC validation.

CD measurement bias has long been reported as an inherent artifact of CD-SEM measurements. However, as feature dimensions decrease and line-to-space ratios increase, the magnitude of previously acceptable levels of measurement bias requires re-examination. Traditional attempts at correcting the bias have entailed slow, destructive, or laborious techniques, such as comparing top-down CD-SEM measurements using standard algorithms with cross-section information, or correlating top-down data with complex tilted images.
In this paper we expand the application of Critical Shape Metrology, a physics-based metrology technique for 3-D profile acquisition based on CD-SEM, to minimizing CD bias in real time for a variety of feature dimensions and profiles. Samples used for the experiments were fabricated through e-beam lithography and 193nm lithography with a wide variation of sidewall angles and CDs, so that the measurement bias could be assessed over a sufficiently large range of patterned shapes. Reference measurements were performed using CD-AFM and FIB-SEM.

As the trends in integrated circuit fabrication follow Moore's Law to smaller feature sizes, one trend seen in lithographic technology is the continually increasing use of optical enhancements such as Optical Proximity Correction (OPC). Small size perturbations are designed into the nominal feature shapes on the reticle such that the intended shape is printed. Verifying the success of OPC is critical to ramp-up and production of new process technologies. CD-SEMs are imaging tools which are capable of measuring feature sizes in any part of a chip, either in a test structure or within a circuit. A new trend in CD-SEM utilization is the implementation of automated recipe generation of complex CD-SEM recipes. The DesignGauge system uses design-to-SEM recipe creation and data collection. Once the recipe creation flow is implemented, the task of recipe creation can be accomplished within minutes. These applications enable a CD-SEM to be utilized to collect data for very complex OPC CD-SEM recipe runs which measure many different unique linewidths, spaces, and pattern placements within a circuit to check OPC success and lithographic fidelity. The data collection can provide accurate data results that can be utilized for comparing achieved feature measurements to nominal values from the design layout. This new application adds much value to the CD-SEM compared to other technologies such as OCD, as it completes the evaluation of in-circuit behavior to test structures in a scribe lane, something OCD currently cannot do. The present work evaluates the capabilities of DesignGauge, which is available for the latest-generation Hitachi S-9380II CD-SEMs. The evaluation includes rigorous tests of navigation, pattern recognition success rates, SEM image placement, throughput of recipe creation and recipe execution.

This study demonstrates the MPPC (Multiple Parameters Profile Characterization) measurement method utilizing ArF photo resist patterns. MPPC is a technique for estimating the three dimensional profile of patterns which are imaged and measured on the CD-SEM (critical dimension scanning electron microscope). MPPC utilizes the secondary electron signal to calculate several indices including top CD, peak CD, top rounding, bottom footing, etc.
The primary focus of this study is to understand the variations in pattern profile caused by changes in exposure conditions. The results demonstrate the ability of MPPC measurements to extract pattern profile shape information that could not otherwise be detected by a conventional bottom CD measurement method. Furthermore, the results were compared to cross-sectional images collected by STEM (scanning transmission electron microscopy) to verify the accuracy of the MPPC technique. The peak CD results accurately estimate the pattern width when the sidewall angle of the feature is nearly vertical. Additionally, line edge roughness (LER) caused by pattern profile variations was evaluated using MPPC. The results suggest that MPPC may be used to evaluate roughness over the entire profile.

Most semiconductor manufacturers expect 193nm immersion lithography to remain the dominant
patterning technology through the 32nm technology node. If this remains the case, the interaction
of more complex designs with shrinking process windows will severely limit parametric yield.
The industry is responding with strategies based upon design for manufacturability (DFM) and
multi-variate advanced process control (APC). The primary goal of DFM is to enlarge the process
yield window, while the primary goal of APC is to keep the manufacturing process in that yield
window. In this work, we discuss new and innovative process metrics, including simulation-based
virtual metrology, that will be needed for yield at the 32nm technology node.

Optical proximity correction (OPC) plays a vital role in the lithography process of cutting-edge IC fabrication. The
quality of lithography models used in OPC is fundamental to the final performance of the OPC in production.
Traditionally, two-dimensional proximity features such as line-end, bar-to-bar or bar-to-line were only partially
characterized because of the difficulty in transferring the SEM information into the OPC model building process. A
new methodology of edge placement error (EPE) measurement using CD-SEM is proposed as part of an OPC model
building and process/OPC qualification flow.
It is not easy to generate EPE measurements because of the inherent need to overlay the design and the SEM in order to
quantify EPE. The quality of the EPE measurement depends on both the accuracy of the SEM image scan rotation and
magnification, but also on the accuracy of pattern matching between the design layout pattern and the realized pattern
(wafer). These problems do not exist in simulation, but model calibration requires a direct comparison between
simulation and measurement. Measuring EPE effectively brings the measurement information into the realm of the
design. Hitachi High-Technologies has developed a "fully automated EPE measurement function" based on design
layout and detected edges of SEM image as a solution to this issue.
This study shows several practical evaluation results using the automated EPE measurement function. The applications
that will be discussed are as follows.
1) Design based classification of edges and subsequent quantification of SEM EPE for many types of edge
arrangement and orientation. In this study, we will examine line-end-adjacent, line-end, corner, and other
critical gate edges.
2) SEM image based classification of EPE fliers as a new population of errors.
3) Comparison between the detected edge of the feature within the SEM image and a polygon shape generated by
lithography simulation to determine the quality of the simulation.
4) Conversion of the SEM image edge contour into an OASIS file and construction of a process variability band to
quantify CD variability for all 2D contexts in a SEM image.
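
The core EPE computation described above can be sketched simply: for each sampled point on a design edge, find the nearest detected SEM contour point and project the offset onto the edge normal to obtain a signed error. This assumes the design and contour are already aligned; it is an illustrative sketch, not Hitachi's automated EPE function.

```python
import numpy as np

def edge_placement_error(design_pts, design_normals, contour_pts):
    """For each sampled design-edge point, report the signed distance to the
    nearest detected SEM contour point, projected onto the outward edge
    normal (illustrative sketch; assumes design and contour are aligned).

    design_pts, design_normals: (N, 2) arrays; contour_pts: (M, 2) array.
    Positive EPE means the printed edge lies outside the design edge.
    """
    epe = []
    for p, n in zip(design_pts, design_normals):
        d = contour_pts - p                              # vectors to contour points
        nearest = d[np.argmin(np.einsum("ij,ij->i", d, d))]
        epe.append(float(np.dot(nearest, n)))            # signed normal offset
    return np.array(epe)
```

Classifying each design edge beforehand (line-end, corner, gate, etc.), as in application 1) above, then amounts to binning these signed errors by edge type.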

Measurement bias is a central concept of critical dimension (CD) metrology. Bias is a complex function of sample, tool and time. Bias variation defines the total measurement uncertainty (TMU). TMU is a measure of metrology quality. Precision (or bias variation with time) is only a part of TMU. Often tool-to-tool and sample-to-sample components of bias variation exceed precision. To measure sample-to-sample bias variation, knowledge of the reference CD value for the samples is required. Since bias is sample-specific, a technology-representative set of calibrated samples has to be created. The described approach has been implemented for a comprehensive evaluation of optical scatterometry (OS) to determine the readiness of OS for 65 nm technology production. The tests covered nine OS applications representative of the technology. The testing revealed that OS metrology is mostly ready to support 65 nm technology production. Spectral ellipsometry and angular reflectometry OS compete closely on all applications. OS demonstrates acceptable averaged bias for CD and sidewall angle for most applications. Correlation of OS to other metrologies is usually satisfactory. At the same time, some problems have been observed. The majority of the tested applications show poor linearity for some measured parameters; cross-correlation between parameters is usually the cause. OS has trouble meeting the semiconductor industry's tight fleet precision requirements. For all applications, OS tool matching is a major component of fleet precision. The evaluation also exposed some general CD metrology challenges. With an accuracy allowance in the sub-nanometer range, it is difficult to find an adequate reference technique to verify and calibrate OS models. Atomic force microscopy (AFM) was chosen as the reference technique during this evaluation, but it has limitations: precision, sidewall profile resolution and finite tip dimensions among them.
OS fleet TMU for many applications is unacceptably high. Further work is needed to better understand the impact of reference data uncertainty on these alarming results. It is clear that to achieve a desired sub-nanometer agreement between reference and OS data, one must pay scrupulous attention to every detail of the experiment.
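
The decomposition of bias variation described above (precision, tool-to-tool, sample-to-sample) can be sketched as a simple variance analysis over a measurement array. This is an illustrative quadrature combination of the components named in the text, not the formal regression-based TMU methodology; the array layout is a hypothetical example.

```python
import numpy as np

def tmu_components(meas, ref):
    """Decompose CD measurement bias variation into repeat (precision),
    tool-to-tool and sample-to-sample components, combined in quadrature
    (illustrative sketch, not the formal TMU definition).

    meas: array of shape (tools, samples, repeats) of measured CDs.
    ref:  reference CD value per sample, shape (samples,).
    """
    bias = meas - ref[None, :, None]                 # bias of every reading
    # precision: pooled repeat-to-repeat variation
    precision = float(np.mean(np.var(bias, axis=2, ddof=1)) ** 0.5)
    per_tool = bias.mean(axis=(1, 2))                # average bias of each tool
    tool_to_tool = float(np.std(per_tool, ddof=1))
    per_sample = bias.mean(axis=(0, 2))              # average bias on each sample
    sample_to_sample = float(np.std(per_sample, ddof=1))
    tmu = (precision**2 + tool_to_tool**2 + sample_to_sample**2) ** 0.5
    return {"precision": precision, "tool_to_tool": tool_to_tool,
            "sample_to_sample": sample_to_sample, "tmu": tmu}
```

A decomposition of this kind makes the abstract's point concrete: even with perfect repeatability, tool-to-tool and sample-to-sample bias terms can dominate the total uncertainty.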

We used full Mueller polarimetry in conical diffraction geometries to characterize 1D holographic optical gratings etched in bulk silica with a patterned photoresist layer. We studied four different samples corresponding to different stages of etching, with a Mueller polarimeter based on ferroelectric liquid crystals, operated in the visible. Two samples were also characterized by standard spectroscopic ellipsometry (SE) in the UV-VIS range (300-800 nm). The measured spectra were fitted with a Rigorous Coupled Wave Analysis code with different models of grating profiles. With the Mueller spectra the model adequacy could be assessed from the stability of the optimal values of the fitting parameters when the azimuthal angle was varied. The conclusions were found to be in agreement with AFM images of the sample, while the fits of the SE data were too poor to provide any information in this respect. A key issue for process control is resist-silica interface localization, a difficult task due to the low index contrast for these two materials. In fact, strong correlation occurs between resist and silica thicknesses when SE spectra, taken in the usual planar diffraction geometry, are fitted. Our approach clearly reduces such parameter correlations, leading to a reliable localization of this interface.
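
The adequacy criterion used above — stability of the best-fit parameters as the azimuthal angle varies — can be expressed as a one-line check on the fitted values. The sketch below assumes the per-azimuth fits have already been obtained (e.g. from an RCWA regression); the dict format is a hypothetical illustration.

```python
import numpy as np

def parameter_stability(fits):
    """Given best-fit parameter dicts obtained at several azimuthal angles,
    report the relative spread of each parameter. A small spread suggests
    the grating model is adequate; a drifting parameter suggests it is not
    (illustrative criterion, mirroring the text).
    """
    names = fits[0].keys()
    return {k: float(np.std([f[k] for f in fits])
                     / abs(np.mean([f[k] for f in fits])))
            for k in names}
```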

In this work we demonstrate the application of a unique type of scatterometer to the measurement of advanced geometry semiconductor devices. Known as a dome scatterometer, the technology is capable of measuring multiple diffraction orders at multiple angles of incidence, thereby providing a means for gathering a large amount of scatterometry data in a short period of time. Dome scatterometers are also capable of measuring light scattered as a function of both theta (zenith) and phi (azimuth) incident angles, and for a variety of polarimetric configurations, all of which provide more information about the scattering structure and contribute to improved sensitivity. A dome scatterometer can also measure a grating structure regardless of its orientation, so that horizontal and vertical structures can be measured without the need for wafer rotation.
Prior to initiating measurements with the dome scatterometer, we surveyed available laser sources and theoretically modeled their expected sensitivity to determine the best illumination wavelength for the applications we intended to study. Our findings demonstrated that a wavelength around 405 nm is suitable for a wide variety of applications, and provides the greatest sensitivity improvement for etch applications. We then modified our dome scatterometry optical system to accommodate a 405 nm laser and performed measurements on several types of grating structures. Examples of the excellent signal-to-noise ratio of dome scatterometry measurements across these applications are provided. Measurement data from applications including patterned photoresist, patterned poly lines and back-end trench interconnect structures will be presented. Comparisons to metrology tools such as AFM and CD-SEM will be made. Precision data will also be summarized, and the extendibility of dome scatterometry will be discussed.