Many advanced devices and design layers currently employ double patterning technology (DPT) to overcome lithographic and OPC limitations at low k1 values. Device layers with k1 below 0.25 certainly require DPT or other pitch-splitting methodologies, and DPT has also been used to improve the patterning of certain device layers with k1 slightly above 0.25, where sufficient pattern fidelity is difficult to achieve with a single exposure. Unfortunately, this broad adoption of DPT has come with a significant increase in patterning process cost. In this paper, we discuss the development of a single patterning process using an integrated Inverse Lithography Technology (ILT) flow for mask synthesis. A single patterning flow reduces the manufacturing cost of a k1 &gt; 0.25 full-chip random contact layer in a memory device by replacing the more expensive DPT process with the ILT flow, while maintaining good lithographic production quality and manufacturable OPC/RET production metrics.<p> </p>This new integrated flow applies ILT to the difficult core region and traditional rule-based assist features (RBAFs) with OPC to the peripheral region of a DRAM contact layer. Comparisons of wafer results between the ILT and non-ILT processes showed the lithographic benefits of ILT and its ability to enable a robust single patterning process for this low-k1 device layer. Advanced modeling with a negative tone develop (NTD) process achieved the accuracy levels needed for ILT to control feature shapes through dose and focus. Details of these aforementioned results are described in the paper.
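As a quick illustration of the k1 threshold discussed above, the Rayleigh relation CD = k1·λ/NA can be inverted to check whether a layer falls below the k1 = 0.25 single-exposure limit. This is a minimal sketch; the NA and half-pitch values are illustrative, not taken from this work.

```python
# Rayleigh resolution relation: CD = k1 * lambda / NA, so k1 = CD * NA / lambda.
def k1_factor(half_pitch_nm: float, na: float, wavelength_nm: float = 193.0) -> float:
    """Return the k1 factor for a given half-pitch, numerical aperture, and wavelength."""
    return half_pitch_nm * na / wavelength_nm

def needs_pitch_split(half_pitch_nm: float, na: float = 1.35) -> bool:
    # Single-exposure optics cannot resolve below k1 = 0.25, so such layers
    # require DPT or another pitch-splitting scheme.
    return k1_factor(half_pitch_nm, na) < 0.25

print(needs_pitch_split(34.0))  # k1 ~ 0.238 -> True (pitch split needed)
print(needs_pitch_split(40.0))  # k1 ~ 0.280 -> False (single exposure possible)
```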

Current patterning technology for manufacturing memory devices is being developed to enable higher density
and higher resolution. However, because applying high-resolution technology reduces the process margin,
OPC has to compensate for this effect. Since the process margin shrinks most severely for contact layers, technologies
such as RBAF (Rule-Based Assist Feature), MBAF (Model-Based Assist Feature), and ILT (Inverse Lithography
Technology) are considered to maximize the process margin [1, 2, 3]. Although ILT is the best solution in terms of
process margin, it has several disadvantages, such as long OPC run-time, mask complexity, and unstable mask fidelity.
The MBAF method is a good compromise among the more advanced techniques, mitigating (but not eliminating) those risks,
which is why it is often used for contact layers.
<br/>
<br/>
When setting up the rules for RBAF, not all patterns are considered. Thus, applying RBAF to contact layers may
reduce the process margin for certain patterns, since the same rule is applied globally. MBAF, on the other hand,
can maximize the process margin for various patterns because it places AFs (Assist Features) at the locations that maximize the
margin for the patterns considered. However, the MBAF method is very sensitive to even a slight change of a target, which
influences the AF locations. This changes the OPCed CD of main features, even those that
should not be affected by the changed target. Once the OPCed CD changes, it is impossible to obtain the same mask
CD even when the mask is manufactured with the same method. If this occurs during mass production, the entire
layer needs to be confirmed after each revision, which causes unnecessary time loss.
<br/>
<br/>
In this paper, we propose a new OPC method to prevent this issue. With this flow, the OPCed shapes of unchanged
patterns remain identical; only the changed targets are re-OPCed and replaced at their corresponding locations, and
the boundaries between those regions are corrected using model-based boundary healing. This method reduces the
overall OPC TAT as well as the time spent verifying the entire layout after each revision. Details of these results
are described in this paper. After further study, this flow can also be applied to ILT.
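The incremental re-OPC idea above can be sketched in a few lines: unchanged targets reuse their stored OPC result, and only changed targets are re-corrected. The data structures and the `opc_fn` callback are illustrative assumptions, not an actual OPC tool interface; model-based boundary healing between regions would follow as a separate step.

```python
# Minimal sketch of incremental re-OPC: targets whose geometry is unchanged
# reuse the cached OPC result, so their OPCed CD stays bit-identical across
# revisions; only changed targets are re-corrected.
def incremental_opc(targets, cache, opc_fn):
    """targets: {location: geometry}; cache: {location: (geometry, opced_shape)}."""
    result = {}
    for loc, geom in targets.items():
        cached = cache.get(loc)
        if cached is not None and cached[0] == geom:
            result[loc] = cached[1]          # reuse: OPCed shape is unchanged
        else:
            result[loc] = opc_fn(geom)       # re-OPC only the changed target
            cache[loc] = (geom, result[loc])
    return result
```

Because reused shapes are returned verbatim, only the re-OPCed regions and their boundaries need verification after a revision, which is where the TAT saving comes from.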

The standard method for defect disposition and verification of repair success in the mask shop is the aerial imaging platform, AIMS<sup>TM</sup>. The CD (Critical Dimension) deviation of the defective or repaired region, as well as the pattern shift, can be calculated by comparing the measured aerial images of this region to those of a reference. Through this analysis it can be determined whether the defect or repaired region will print on the wafer under the illumination conditions of the scanner. The analysis of the measured aerial images from the AIMS<sup>TM</sup> is commonly performed manually, using the analysis software available on the system or with the help of an analysis software called RV (Repair Verification). Because the process is manual, it is not standardized and is subject to operator variation. This manual aerial image analysis is time consuming, depends on the skill level of the operator, and contributes significantly to the overall mask manufacturing process flow. AutoAnalysis (AA), the first application available for the FAVOR® platform, provides fully automated analysis of AIMS<sup>TM</sup> aerial images [1] and runs in parallel with the measurement of the aerial images. In this paper, we investigate the initial AutoAnalysis performance compared to the conventional method using RV, and its application to a production environment. The evaluation is based on the defect CD of three pattern types: contact holes, dense lines and spaces, and peripheral structures. The defect analysis results for different patterns and illumination conditions will be correlated, and challenges in transitioning to the new approach will be discussed.

As semiconductor devices continue to shrink, accurate measurement and control of overlay have been emphasized to secure more overlay margin. Conventional overlay analysis methods are based on optical measurement of an overlay mark. However, the overlay data obtained from these optical methods cannot represent the exact misregistration between two layers at the circuit level. The mismatch may arise from the size or pitch difference between the overlay mark and the real pattern. Pattern distortion caused by CMP or etching can be a source of the mismatch as well. Another issue is that the overlay in the real circuit pattern varies depending on location. Optical overlay measurement methods such as IBO and DBO, which use an overlay mark on the scribeline, are not capable of determining the exact overlay values of the real circuit. Therefore, the overlay values of the real circuit need to be extracted to integrate the semiconductor device properly. Circuit-level overlay measurement using a CD-SEM is too time-consuming to extract enough data to indicate the overall trend across the chip, but a DBM (Design Based Metrology) tool can derive sufficient data to display the overlay tendency of the real circuit region with high repeatability. An E-beam based DBM tool can therefore be an alternative overlay measurement method.<p> </p>In this paper, we show that the overlay values extracted from optical measurement cannot represent the circuit-level overlay values. We also demonstrate the possibility of correcting the misregistration between two layers using the overlay data obtained from the DBM system.

Aberration sensitivity matching between overlay metrology targets and the device cell pattern has become a common requirement on the latest DRAM process nodes. While the extreme illumination modes used demand that the delta in aberration sensitivity be optimized, this is effectively limited by the ability to print an optimal target that meets detectability and accuracy requirements. Therefore, advanced OPC techniques are required to ensure printability and optimal detectability performance while maintaining sufficient process window to avoid patterning or defectivity issues.<p> </p>In this paper, we compare various mark designs with the real cell in terms of aberration sensitivity under a specific illumination condition. The specific illumination model was used for aberration sensitivity simulation while varying mask tones and target designs. Then, diffraction-based simulation was conducted to analyze the effect of aberration sensitivity on the actual overlay values. The simulation results were confirmed by comparing the overlay results obtained by diffraction-based metrology with the cell-level overlay values obtained using a Critical Dimension Scanning Electron Microscope.

As the technology node shrinks for bit growth, various technologies have been developed for high productivity. Nevertheless, lithography technology is close to its limit. To overcome these limits, EUV (Extreme Ultraviolet Lithography) and DSA (Directed Self-Assembly) are being developed, but problems remain for mass production. Currently, lithography technology development focuses on solving the problems related to fine patterning and widening the process window.<p> </p>One of these technologies is NTD (Negative Tone Development), which uses inverse development compared to PTD (Positive Tone Development). The exposed area is removed by the developer in PTD, whereas the exposed area remains in NTD. It is well known that NTD has better characteristics than PTD in terms of DOF (Depth of Focus) margin, MEEF (Mask Error Enhancement Factor), and LER (Line Edge Roughness) for both small contact holes and isolated spaces [1]. Contact hole patterning is especially more difficult than space patterning because of the lower image contrast and smaller process window [2]. Thus, we have focused on the behavior of both NTD and PTD contact hole patterns in various environments. We analyzed the optical performance of NTD and PTD as a function of size and pitch using SMO (Source Mask Optimization) software. Moreover, the simulation results for the NTD process were compared with NTD wafer-level performance, and the process window variation of NTD was characterized through both results. These results provide a good guideline for avoiding DOF loss when using an NTD process for contact layers with various contact types.<p> </p>In this paper, we studied the impact of different sources on various combinations of pattern sizes and pitches while estimating DOF trends across source and pattern types.

This paper presents an automated DFM solution to generate Bit Line Pattern Dummy (BLPD) for memory devices. Dummy shapes are aligned with memory functional bits to ensure a uniform and reliable memory device. We present a smarter approach that uses an analysis-based technique for adding dummy shapes of different types according to the space available. Experimental results are based on the layout of a mobile dynamic random access memory (DRAM).

As design rules shrink, overlay has become a critical factor for semiconductor manufacturing. However, the overlay
error determined by conventional measurement of an overlay mark, based on IBO or DBO, often
does not represent the physical placement error in the cell area. The mismatch may arise from the size or pitch
difference between the overlay mark and the cell pattern. Pattern distortion caused by etching or CMP can also
be a source of the mismatch. In 2014, we demonstrated that measuring overlay in the cell
area with a DBM (Design Based Metrology) tool yields more accurate overlay values than the conventional method
using an overlay mark. We verified the reproducibility by measuring repeating patterns in the cell area,
and demonstrated the reliability by comparison with CD-SEM data.
Until now we have focused on the overlay mismatch between the overlay mark and the cell area; here we additionally
consider cell areas with different pattern density and etch loading, where cells in diverse patterning
environments show different overlay values. In this paper, the overlay error was investigated from cell edge to
center. For this experiment, we examined several critical layers in DRAM using an improved (better resolution and
speed) DBM tool, the NGR3520.
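Conceptually, the cell-level overlay extraction performed by a DBM tool can be pictured as comparing measured contours against the design reference. The toy sketch below is our own illustration, not the NGR3520's algorithm: the overlay vector at a site is taken as the centroid offset between the measured polygon and the design polygon, and averaging such vectors per region would reveal an edge-to-center trend.

```python
# Toy cell-level overlay extraction: overlay at a site = centroid of the
# measured contour minus centroid of the design polygon (illustrative only).
def centroid(points):
    """Average the vertex coordinates of a polygon given as (x, y) tuples."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def overlay_vector(measured_pts, design_pts):
    """Return the (dx, dy) overlay offset between measured and design shapes."""
    mx, my = centroid(measured_pts)
    dx, dy = centroid(design_pts)
    return (mx - dx, my - dy)
```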

As the technology node shrinks, ArF immersion lithography is reaching the limits of wafer patterning, and weak points are easily generated during mask processing. To obtain robust patterning results, the design house conducts lithography rule checking (LRC). Despite LRC, we still find weak points at the verification stage of optical proximity correction (OPC); such a point is called a hot spot point (HSP). Many studies have been performed on fixing HSPs. One of the most common hot spot fixing (HSF) methods is modification biasing, which consists of line-resizing and space-resizing. In addition to rule-based biasing, resolution enhancement techniques (RET) including inverse lithography technology (ILT) and model-based assist features (MBAF) have been adopted to remove hot spots and maximize the process window. If an HSP is found during the OPC verification stage, these various HSF methods can be applied; however, an HSF process added on top of the regular OPC procedure increases the OPC turn-around time (TAT). <p> </p>In this paper, we introduce a new HSF method that achieves a shorter OPC TAT than the common HSF methods. The new method consists of two concepts. The first is that the OPC target point is controlled to fix the HSP: the target point is moved to the optimum position where the edge placement error (EPE) can be zero at critical points. Many parameters, such as model accuracy or the OPC recipe, can cause a larger EPE. The second is controlling the model offset error through target point adjustment. Figure 1 shows a case where the EPE is not zero, meaning the simulation contour was not well targeted after the OPC process. In contrast, Figure 2 shows the target point moved by -2.5nm using the target point control function; as a result, the simulation contour matches the original layout. This function can be powerfully applied to the OPC procedures of memory and logic devices.
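The target-point control concept above reduces to a simple rule: if the calibrated model leaves a residual EPE at a critical point, shift the OPC target edge by the opposite amount so the simulated contour lands on the layout. The one-line sketch below is our hedged illustration of that idea, not the actual OPC tool function.

```python
# Illustrative target-point control: cancel a residual edge placement error
# (EPE) by moving the OPC target point in the opposite direction.
def adjust_target(target_pos_nm: float, measured_epe_nm: float) -> float:
    """Return the shifted target position that cancels the residual EPE."""
    return target_pos_nm - measured_epe_nm

# e.g. a +2.5 nm residual EPE is cancelled by moving the target -2.5 nm
print(adjust_target(0.0, 2.5))  # -2.5
```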

As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore, device manufacturers are evaluating improvements in SRAF placement algorithms that do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs, such as Rule-Based Assist Features (RBAF), Model-Based Assist Features (MBAF), and hybrid assist features combining both. RBAF continues to be deployed, even with the availability of MBAF and Inverse Lithography Technology (ILT). Certainly for the 3x nm node, and even at the 2x nm nodes and below, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, a faster method to create the AF rule tables is also needed. The current method typically involves making masks and printing wafers that contain several experiments, varying the main feature configurations, AF configurations, dose conditions, and defocus conditions; this is a time-consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source shape redesigns occur more frequently, escalating the cost of rule table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables, each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns and evaluates the simulated CD at nominal, defocused, and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing. It then analyzes the simulation data to automatically create AF rule tables.
The analysis results display the cost of different AF configurations as the space between a pair of main features grows. In summary, the model-based rule table method makes it much easier to create rule tables, leading to faster rule-table creation and a lower barrier to the creation of more rule tables.
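The core loop of the MART approach described above can be sketched as follows: for each main-feature spacing, simulate every candidate AF configuration at nominal, defocus, and off-dose conditions, score each configuration by its worst-case CD deviation, and keep the best one as a rule-table row. The `simulate_cd` callback is a stand-in for the lithographic simulator, not a real API.

```python
# Hedged sketch of model-assisted rule-table creation: pick the AF
# configuration whose worst-case simulated CD deviation across process
# conditions is smallest for a given main-feature spacing.
def best_af_config(spacing_nm, candidates, simulate_cd, target_cd_nm):
    """candidates: iterable of AF configurations; simulate_cd(spacing, cfg, cond)
    is assumed to return the simulated CD in nm under the given condition."""
    conditions = ("nominal", "defocus", "off_dose")
    def cost(cfg):
        # worst-case CD deviation from target across all process conditions
        return max(abs(simulate_cd(spacing_nm, cfg, c) - target_cd_nm)
                   for c in conditions)
    return min(candidates, key=cost)
```

Running this over a sweep of spacings would produce one rule-table row per spacing, replacing the mask-and-wafer experiments with simulation.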

Until recent device nodes, lithography has struggled to improve its resolution limit. Even though next-generation lithography technology is facing various difficulties, several innovative resolution enhancement technologies, based on the 193nm wavelength, were introduced and implemented to keep up the trend of device scaling. Scanner makers keep developing state-of-the-art exposure systems that guarantee higher productivity and meet more aggressive overlay specifications. “The scaling reduction of the overlay error has been a simple matter of the capability of exposure tools. However, it is clear that the scanner contributions may no longer be the majority component in total overlay performance. The ability to control correctable overlay components is paramount to achieve the desired performance.(2)” In a manufacturing fab, the overlay error determined by a conventional overlay measurement, i.e. using an overlay mark based on IBO or DBO, often does not represent the physical placement error in the cell area of a memory device. The mismatch may arise from the size or pitch difference between the overlay mark and the cell pattern. Pattern distortion, caused by etching or CMP, can also be a source of the mismatch. Therefore, the need for direct overlay measurement on the cell pattern is gradually increasing, both in manufacturing and in development. To overcome the mismatch between conventional overlay measurement and the real layer-to-layer placement error in the cell area of a memory device, we suggest an alternative overlay measurement method utilizing a design-based metrology tool. The basic concept of this method is shown in figure 1. A CD-SEM measurement of the overlay error between layers 1 and 2 could be the ideal method, but it takes too long to extract a large amount of data at the wafer level. An E-beam based DBM tool provides the speed to cover the whole wafer with high repeatability.
This is enabled by using the design as a reference for overlay measurement and by a high-speed scan system. In this paper, we demonstrate that direct overlay measurement in the cell area can identify the mismatch exactly, instead of using an overlay mark. This experiment was carried out for several critical layers in DRAM and Flash memory, using the DBM (Design Based Metrology) tool NGR2170&trade;.

As device design rules get smaller, it is hard to obtain enough process window in terms of DOF and EL. From the
viewpoint of device integration, the lithography process has become more and more closely coupled to the etch process. It has
been claimed that the photoresist profile is closely related to etch bias and the vertical profile: resist top-loss and bottom
slope seriously affect the after-etch profile. To address these problems, a new model-based verification method is
necessary for preventing hot spots.
In this paper, we propose a more practical method of model-based verification using rigorous simulation and wafer
verification results. A highly accurate model is obtained by physical model fitting with a minimal experimental data set.
After that, virtual data are extracted from the rigorous simulation model to build a full-chip model-based verification
model. Two data sets are needed to calibrate the two-level model, which detects resist top-loss and bottom slope.
Finally, this article shows comparison results of model-based verification and real wafer inspection. We also
show that the newly proposed method is another good candidate for addressing existing problems such as pinching and
bridging after etch and CMP processes.

The insertion of SRAFs (Sub-Resolution Assist Features) is one of the most frequently used methods to enlarge the process window. In most cases, the size of the SRAF is proportional to the focus margin of the drawn patterns. However, there is a trade-off between SRAF size and SRAF printing, because SRAFs are not supposed to be patterned on the wafer. For this reason, many OPC engineers have tried to place bigger and more numerous SRAFs within the limits of the possible, and the many papers on predicting SRAF printability published in recent years reflect this circumstance. Pattern dummies are inserted to enhance the lithographic process margin and CD uniformity, unlike CMP dummies, which target uniform metal line height. It is common to place pattern dummies at designated locations at the design step, considering the pitch of the real patterns. However, from a lithographic point of view it is not always desirable to generate pattern dummies based on rules. In this paper, we introduce a model-based pattern dummy insertion method, which places pattern dummies at the locations where model-based SRAFs would be located. We applied the model-based pattern dummy to layers in logic devices and studied which layers benefit most from the insertion of dummies.

DRAM chip area is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination, and optical proximity correction (OPC) combined with model-based SRAF placement, is applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We will show how to analytically derive the only valid, geometrically limited source. Another consequence of the two-beam interference limitation is a “super stable” core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch with the highest susceptibility to defocus. These challenges turn layout correction into a complex optimization task demanding a layout optimizer that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF), and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features.
The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally because negative resist profile taper angles perturb CD-at-bottom characterization by scanning electron microscope (SEM) measurements. The high resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We discuss the need for a rigorous mask optimization process for the DRAM contact cell layout, yielding mask layouts that are optimal in process performance, mask manufacturability, and accuracy. In this paper, we show the step-by-step process from analytical illumination source derivation, through NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.

Extreme ultraviolet lithography (EUVL) is one of the leading lithography technologies for high-volume manufacturing. EUVL is based on a reflective optical system; therefore, critical patterning issues arise from the surface of the photomask. Defects below and inside the multilayer or absorber of the EUV photomask are among the most critical issues for implementing EUV lithography in mass production, so it is very important to pick out and repair printable mask defects. Unfortunately, however, the infrastructure for securing defect-free photomasks, such as inspection tools, is still under development and does not seem likely to be ready soon. To overcome the lack of infrastructure for EUV mask inspection, we discuss an alternative methodology based on wafer inspection results using a DBM (Design Based Metrology) tool. It is very challenging for metrology to identify real mask defects from wafer inspection results, since various sources are possible contributors. One of them is random defects arising from poor CD uniformity. It is probable that such random defects make up the majority of a defect list that also includes real mask defects, so CD uniformity must clearly be considered to pick out only the real mask defects. In this paper, the methodology to determine real mask defects from wafer inspection results is discussed. Experiments are carried out on a contact layer and a metal layer using the mask defect inspection tool Teron (KLA6xx) and the DBM (Design Based Metrology) tool NGR2170&trade;.

A distinctive feature of the metal contact layout in NAND flash memory devices is that small-pitch contact patterns and random-pitch contact patterns coexist in one layout. This kind of pitch difference between cell array patterns and isolated single patterns did not have a decisive effect on wafers when the illumination condition was not aggressive. However, under extreme illuminations the pattern pitch difference has caused various problems, including best focus shift. The common DOF margin of a contact layout is degraded when the best focus for each pattern varies. The mask topography effect is well known as the major cause of best focus shift between contact patterns of different pitches. The demand for technology node shrink to reduce production cost has required the adoption of hyper-NA illumination conditions, and this aggressive illumination has made it hard to secure enough common DOF margin due to the best focus shift. In this work, the best focus shift among different-pitch patterns caused by mask 3D effects is studied under various illumination conditions. It is found that the more aggressive the illumination condition and the bigger the pitch difference among patterns in one layout, the bigger the best focus shift becomes. We also suggest a solution for avoiding this DOF margin degradation: SRAF optimization.
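Why a best focus shift degrades the common DOF margin can be seen with a toy calculation: the usable window is the intersection of each pattern's individual focus window, so any shift between the window centers directly subtracts from the overlap. The numbers below are illustrative, not measured values from this work.

```python
# Toy common-DOF calculation: the common process window is the intersection
# of each pattern's focus window, so best-focus shifts shrink it directly.
def common_dof(windows):
    """windows: list of (best_focus_nm, dof_nm) per pattern; returns overlap in nm."""
    lo = max(bf - dof / 2 for bf, dof in windows)   # highest lower edge
    hi = min(bf + dof / 2 for bf, dof in windows)   # lowest upper edge
    return max(0.0, hi - lo)

# two patterns with identical 120 nm DOF, but a 40 nm best-focus shift
# between them, lose 40 nm of common margin
print(common_dof([(0.0, 120.0), (40.0, 120.0)]))  # 80.0
```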

Traditional rule-based and model-based OPC methods simulate only a very local area (generally less than 1um) to identify and correct systematic optical or process problems. Despite this limitation, these methods have been very successful for many technology generations and are a major reason the industry has been able to push lithographic k1 down so far. This was also enabled by good across-exposure-field lithographic process control, which minimized longer-range effects across the field. Now, however, the situation has become more complex. The lithographic single-exposure resolution limit with 1.35NA tools remains about 80nm pitch, but the final wafer dimensions and pitches required in advanced technologies continue to scale down. This puts severe strain on lithographic process and OPC CD control. Therefore, formerly less important second-order effects are now starting to have significant CD control impact if not corrected. In this paper, we provide examples and discussion of how optical and chemical flare related effects are becoming more problematic, especially at the boundaries of large, dense memory arrays. We then introduce a practical correction method for these systematic effects that reuses some of the recent long-range-effect-correcting OPC techniques developed for EUV pattern correction (such as EUV flare correction). We next analyze the benefits of these OPC methods for chemical flare issues in very low-k1 193nm lithography. Finally, we summarize our work and briefly mention possible future extensions.
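A long-range flare correction of the kind reused from EUV OPC flows can be pictured as a density blur followed by dose compensation: local pattern density is convolved with a broad kernel to estimate the flare background, which the correction then subtracts. The box kernel and the 2% flare fraction below are illustrative assumptions, not a calibrated flare model.

```python
# Hedged sketch of a long-range flare correction: estimate the flare
# background by blurring pattern density with a wide kernel, then compensate
# the local dose. Kernel shape and flare fraction are illustrative.
def flare_map(density, radius=3, flare_fraction=0.02):
    """density: 2D list of local pattern density in [0, 1]; returns flare estimate."""
    rows, cols = len(density), len(density[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total, count = 0.0, 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    # clamp at the array boundary, mimicking constant surroundings
                    ii = min(max(i + di, 0), rows - 1)
                    jj = min(max(j + dj, 0), cols - 1)
                    total += density[ii][jj]
                    count += 1
            out[i][j] = flare_fraction * total / count
    return out

def corrected_dose(nominal_dose, flare):
    # first-order compensation: subtract the flare background from the dose
    return [[nominal_dose - f for f in row] for row in flare]
```

At the boundary of a large dense array the blurred density, and hence the flare estimate, falls off, which is exactly where the abstract notes the CD control impact appears.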

As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory
and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore,
device manufacturers are evaluating improvements in SRAF placement algorithms that do not sacrifice main feature
(MF) patterning capability. AF placement algorithms can be categorized broadly as either rule-based (RB) or model-based
(MB). However, combining these different algorithms into new integrated solutions may enable a more optimal overall
solution.
RBAF has been the baseline AF placement method for many previous technology nodes. Although RBAF algorithm
complexity limits its use with very extreme illumination, RBAF is still a powerful option in certain scenarios. One
example is repeating patterns in memory arrays: RBAF algorithms can be finely optimized and verified
experimentally without building complex models. RBAF also guarantees AF placement consistency based only
on the very local geometric environment, which is important in applications where consistent signal propagation is of
critical importance.
MBAF algorithms deliver the ability to reliably place assist features for enhanced process window control across a wide
variety of layout feature configurations and aggressive illumination sources. These methods optimize sophisticated AF
placement to improve the main feature process window, but without performing full main feature OPC. The flexibility of MBAF allows
efficient investigation of future technology nodes as the number of interactions between local layout features
increases beyond what RBAF algorithms can effectively support.
A hybrid approach that combines both RBAF and MBAF methods for SRAF generation and placement can therefore
be a good alternative. Combining the two SRAF placement options can yield an improved process window compared
to either approach independently, since the two methods can supplement each other with complementary advantages.
In this paper we evaluate the impact of SRAF configuration on pattern profile and CD margin window, and the
manufacturing application of MBAF and hybrid algorithms compared to the current OPC without AFs. In
conclusion, we suggest a methodology to set up the optimum SRAF configuration using these AF methods with regard to
process window.

With the shrinkage of semiconductor device scales, advanced semiconductor industries face tremendous challenges
in process control. As lithography and etch processes are pushed to get smaller dimensions, the overlay and
wiggling control are hot issues due to the limiting of pattern performance. Many chip makers are using Double
Patterning Technology (DPT) process to overcome design rule limitations but they are also concerned about overlay
control. In the DPT process, obtaining accurate overlay data by measuring overlay marks with traditional metrology is
difficult because of the differences in shape and position between the cell pattern and the overlay marks. Cell-to-overlay-mark
mismatch will occur when there is lens aberration or mask registration error. Therefore, the best way to
obtain accurate overlay data without error is to measure the real cell itself. The overlay of a cell array using the DPT
process can be measured by analyzing the relative position of the 2nd exposed pattern with respect to the 1st exposed pattern. But
it is not easy to clearly distinguish the 1st and 2nd layers in a patterned cell array image using CD SEM. A
Design Based Metrology (DBM) system can help identify whether a cell pattern belongs to the 1st or 2nd layer, so the overlay error
between the 1st and 2nd layers in the DPT process can be checked clearly. Another noticeable problem in advanced
processing is wiggling. The wiggling of a pattern becomes more severe after the etch process and must be controlled to meet
the electrical characteristics the semiconductor device requires. The first stage of wiggling control is to
understand the level of wiggling, which is crucial to device performance. The DBM system can also be used to
quantify wiggling by determining specially designed parameters. In this paper we introduce overlay
verification and wiggling quantification through a new methodology for advanced memory devices.
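The cell-based overlay extraction described above can be sketched as follows: the overlay of the 2nd exposure relative to the 1st is estimated from the mean displacement of each layer's measured cell centroids from their design positions. This is a hedged illustration only; the function names, data layout, and numbers are assumptions, not taken from any real DBM tool.

```python
# Hypothetical sketch of cell-based DPT overlay extraction. Each defect-free
# cell contributes a displacement (measured centroid minus design position);
# the 2nd-to-1st layer overlay is the difference of the two layer averages.

def mean_displacement(measured, design):
    """Average (dx, dy) of measured centroids relative to design positions."""
    n = len(measured)
    dx = sum(m[0] - d[0] for m, d in zip(measured, design)) / n
    dy = sum(m[1] - d[1] for m, d in zip(measured, design)) / n
    return dx, dy

def dpt_overlay(meas_1st, design_1st, meas_2nd, design_2nd):
    """Overlay of the 2nd exposure relative to the 1st, in the same units (nm)."""
    d1 = mean_displacement(meas_1st, design_1st)
    d2 = mean_displacement(meas_2nd, design_2nd)
    return d2[0] - d1[0], d2[1] - d1[1]

# Illustrative values: 1st layer shifted +1 nm in x, 2nd layer +3 nm in x.
design1 = [(0.0, 0.0), (100.0, 0.0)]
meas1 = [(1.0, 0.0), (101.0, 0.0)]
design2 = [(50.0, 0.0), (150.0, 0.0)]
meas2 = [(53.0, 0.0), (153.0, 0.0)]
print(dpt_overlay(meas1, design1, meas2, design2))  # (2.0, 0.0)
```

Classifying each measured cell as 1st- or 2nd-layer is exactly the step the DBM system provides; the arithmetic afterwards is straightforward.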

There are strong demands for techniques able to extend the application of ArF immersion lithography.
In particular, leading-edge techniques are required to form very small hole patterns below 50 nm. Several
techniques such as double patterning, free-form illumination and resist shrinkage technology are
considered viable candidates. Above all, NTD (Negative Tone Development) is regarded as the most
promising technology for the realization of small hole patterns.
When the NTD process is applied, hole patterns are defined by island-type features on the reticle, and consequently its
optical performance is better than that of the PTD (Positive Tone Development) process. However, since it is
still difficult to define extremely small hole patterns below 40 nm, a new process combining NTD with RELACS
is being introduced to overcome the resolution limitation. NTD combined with RELACS, the most advanced
lithography technology, definitely enables us to generate smaller hole patterns on the wafer.
A chemical shrinkage technology, RELACS (Resolution Enhancement Lithography Assisted by Chemical Shrink),
utilizes the cross-linking reaction catalyzed by the acid component in a predefined resist pattern. In the case of
the PTD-plus-RELACS process, we already know that the CD change after the shrinkage process is not
influenced by duty ratio, so we could easily apply the RELACS bias to meet the CD target during the OPC (Optical
Proximity Correction) procedure.
The NTD-plus-RELACS process, however, has been neither clearly understood nor verified. It requires more investigation
of the physical behavior during the combined process in order to define exact hole patterns. The newly introduced process
may require an additional OPC modeling procedure to satisfy the target CD when the NTD RELACS bias takes different values
according to pitch and shape.
This study covers the investigation of two types of resist shrinkage process, PTD and NTD. The
optimized OPC methodology will be discussed through the evaluation of simple array hole patterns and random
hole patterns.
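The pitch-dependent bias question raised above can be made concrete with a small sketch: during OPC target setting, the litho-step target CD is pre-biased by the expected RELACS shrink, which in an NTD flow may need a per-pitch table rather than a single value. The table entries, shrink values, and helper names below are hypothetical illustrations, not measured data.

```python
# Hedged sketch of applying a pitch-dependent RELACS shrink bias during OPC
# target setting. In a PTD+RELACS flow a single shrink bias can suffice; an
# NTD+RELACS flow may need a per-pitch bias table, as the abstract suggests.

RELACS_BIAS_NM = [            # (max pitch in nm, assumed shrink per side in nm)
    (120, 6.0),               # dense holes: assumed larger shrink
    (200, 5.0),
    (float("inf"), 4.0),      # isolated holes: assumed smaller shrink
]

def relacs_bias(pitch_nm):
    """Return the assumed per-side shrink bias for a given pitch."""
    for max_pitch, bias in RELACS_BIAS_NM:
        if pitch_nm <= max_pitch:
            return bias

def pre_shrink_target(final_cd_nm, pitch_nm):
    """CD the litho step must print so the post-RELACS hole hits final_cd_nm."""
    return final_cd_nm + 2.0 * relacs_bias(pitch_nm)

print(pre_shrink_target(40.0, 110.0))  # 52.0: dense hole, 6 nm/side bias
print(pre_shrink_target(40.0, 300.0))  # 48.0: isolated hole, 4 nm/side bias
```

If measurement showed the NTD RELACS bias also varying with shape, the lookup would need a second key (e.g. hole aspect ratio), but the pre-biasing idea is the same.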

State of the art Extreme Ultra Violet Lithography (EUVL) gives high hope for further shrinkage of
semiconductor devices, but currently, EUVL is not ready for 2xnm node manufacturing and ArF immersion
must extend its capability for manufacturing 2x nm devices. Extending the limit of ArF requires a variety of
Resolution Enhancement Techniques (RET) such as inverse lithography (ILT), double patterning (DPT),
spacer patterning and so on. One of the brightest candidates for extending ArF for the contact layer is negative
tone development (NTD), since this process utilizes the high contrast of the inverse-tone mask for
patterning. NTD usually yields a higher process margin than the conventional positive tone development
(PTD) process1.
Therefore, in this paper we will study the application of NTD from an optical proximity correction (OPC)
and simulation perspective. We will first discuss how NTD differs from PTD. We will also discuss how
to optimize the NTD process from a simulation perspective, from source optimization to model calibration, and
what to look out for when converting a PTD process to an NTD process, from OPC models
to design rule modification. Finally, we will demonstrate the superiority of the NTD process through modeling
and simulation results that take the factors above into account.

As the design rule shrinks, various techniques such as RET and DFM have been continuously developed and
applied in the lithography field. We have struggled not only to obtain a sufficient process window with those
techniques but also to feed hot spots back to the OPC process for yield improvement in mass production. The OPC
verification procedure, which iterates from OPC to wafer verification until the CD targets are met and
hot spots are cleared, is becoming more important to ensure robust and accurate patterning and tight hot spot
management.
Generally, wafer verification results, which demonstrate how well OPC corrections are made, need to be fed back to
the OPC engineer in an effective and accurate way. First, however, it is not possible to cover all transistors in a full chip
with the limited OPC monitoring points that have been used for wafer verification. Secondly, the hot spots
extracted by an OPC simulator are not always reliable enough to represent defect information for the full chip.
Finally, CD SEM measurement takes considerable turnaround time (TAT) and labor. These difficulties in wafer verification
would be improved by design-based analysis. The optimal OPC monitoring points are created by classifying all
transistors in the full-chip layout, and a hot spot set is selected by a pattern matching process using NanoScope<sup>TM</sup>,
a fast design-based analysis tool, seeded with a very small number of hot spots extracted by the OPC simulator from the
full-chip layout. Then, each set is used for wafer verification with the design-based inspection tool NGR2150<sup>TM</sup>. In this
paper, a new verification methodology based on design-based analysis will be introduced as an alternative method for
effective control of OPC accuracy and hot spot management.

As the technology node of memory devices approaches 30 nm, the process window is becoming much
narrower and production yield is getting more sensitive to tiny defects which previously were rarely, if ever, critical. It
will therefore be very hard to maintain current production yields in the near future.
Wafer defects can be classified into systematic and random defects, and systematic defects can be further divided
into design-related and process-related defects. A narrow process window is generally thought to be the source of
these systematic defects, and we have to extend the process window with Design for Manufacturing (DFM) and control
process variation with Advanced Process Control (APC) to ensure production yield.
The sensitivity to random defects, however, is tied to the smaller design rule itself: narrower spaces between lines
are subject to bridge defects, and smaller lines to pinch defects.
Die-to-database (DB) Design Based Metrology (DBM) has so far mainly been used for detecting systematic defects and
feeding back to DFM and APC. We are trying to extend the application of DBM to random defect control. Conventional defect inspection systems are reaching their limits due to the low signal-to-noise ratio at feature sizes below 40 nm. It was found that a die-to-DB metrology tool is capable of reliably detecting small but critical defects.

It is necessary to apply extreme illumination conditions to real devices as the minimum feature size of the device
shrinks. As k1 decreases, ultra-extreme illumination has to be used. With such illumination, however, CD and
process windows fluctuate dramatically as the pupil shape changes slightly. Over the past several years, Pupil Fit Modeling
(PFM) was developed to analyze pupil shape parameters that are independent of each other. The first
objective of this work is to distinguish the pupil shapes of different scanners by separating more parameters. From the pupil
parameter analysis, the major factors behind CD or process window differences between two scanner systems become
obvious. Because pupil parameters correlate with scanner knobs, the analysis clearly
identifies which scanner knob should be compensated. The second objective is to define a specification for each parameter by
analyzing the CD budget of each pupil parameter. By periodically monitoring the pupil parameters against this
specification, scanner systems in production lines can be maintained in an ideal state. Additionally, OPC model
accuracy can be enhanced by using a highly accurate fitted pupil model. Recently, other applications of the
pupil model have been reported for improving the accuracy of OPC and model-based verification models: modeling with
average optics and hot spot detection with scanner-specific models are easily adopted using a pupil fit model. Therefore,
applying pupil fit parameters to process models is very useful for improving model accuracy.
In our study, the model accuracy enhancement obtained with PFM is investigated and analyzed. OPC and
hot spot detection capability results with the pupil fit model are shown. Also, trends of CD and
process window for each scanner parameter are evaluated using the pupil fit model. As a result, we were able to find
which pupil parameters influence critical-layer CD, and applying this result gave better accuracy in
detecting hot spots for model-based verification.
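The per-parameter CD budget idea above can be sketched with a first-order model: assume each pupil parameter perturbs CD linearly with some sensitivity, so the total CD deviation is the sum of the per-parameter contributions, and the spec for each parameter is the drift that consumes its allotted share of the budget. The parameter names, sensitivities, and budget below are made-up illustrations, not values from the paper.

```python
# Illustrative linear CD-budget sketch: dCD ~ sum_i s_i * dp_i, where s_i is
# an assumed CD sensitivity (nm per unit of pupil parameter drift).

SENSITIVITY = {               # nm CD change per unit parameter drift (assumed)
    "sigma_in": 8.0,
    "sigma_out": 6.0,
    "pole_balance": 4.0,
    "ellipticity": 2.0,
}

def cd_deviation(drifts):
    """Linear estimate of total CD deviation from pupil parameter drifts."""
    return sum(SENSITIVITY[name] * d for name, d in drifts.items())

def parameter_specs(total_budget_nm):
    """Split the CD budget equally and convert each share into a drift spec."""
    share = total_budget_nm / len(SENSITIVITY)
    return {name: share / s for name, s in SENSITIVITY.items()}

print(round(cd_deviation({"sigma_in": 0.01, "pole_balance": 0.02}), 3))  # 0.16
print(round(parameter_specs(0.8)["sigma_in"], 3))  # 0.025
```

In practice the sensitivities would come from simulation with the fitted pupil model, and the budget split need not be equal; the sketch only shows how a parameter spec follows from a CD budget.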

As the k1 factor for mass production of memory devices has decreased to almost its theoretical limit, the process
window of lithography is getting much smaller and production yield has become more sensitive to even small
process variations in lithography. So it is necessary to control process variations more tightly than ever. In
mass-production, it is very hard to extend the production capacity if the tool-to-tool variation of scanners and/or scanner
stability through time is not minimized. One of the most critical sources of variation is the illumination pupil. So it is
critical to qualify the shape of pupils in scanners to control tool-to-tool variations.
Traditionally, the pupil shape has been analyzed using classical pupil parameters, but these
basic parameters sometimes cannot distinguish tool-to-tool variations. It has been found that the pupil shape can be
changed by illumination misalignment or damage in the optics, and these changes can have a great effect on critical
dimension (CD), pattern profile and OPC accuracy. These imaging effects are not captured by the basic pupil parameters.
The correlation between CD and pupil parameters will become even more difficult with the introduction of more
complex (freeform) illumination pupils.
In this paper, illumination pupils were analyzed using a more sophisticated parametric pupil description (Pupil Fit
Model, PFM), and the impact of pupil shape variations on CD for critical features is investigated. Tool-to-tool
mismatch in the gate layer of a 4X-node memory device is demonstrated as an example. We also interpret which
parameter is most sensitive for CD in different applications. The more sophisticated parametric pupil
description was found to be much better than the traditional way of pupil control. However, our examples also show that
tool-to-tool pupil variation and the pupil variation of a scanner through time cannot be adequately monitored by pupil
parameters alone. The best pupil control strategy is a combination of pupil parameters and CD simulated using measured
or modeled illumination pupils.

Recently, several Design Based Metrologies (DBMs) have been introduced and are in use for wafer verification. The
major applications of DBM are OPC accuracy improvement, DFM feedback through Process Window
Qualification (PWQ) and advanced process control. In general, however, the amount of output data from DBM is
so large that it is very hard to handle the data for valuable feedback. In the case of PWQ, thousands
of hot spots are detected on a single chip at the edge of the process window, so it takes much time and labor to review
and analyze all the hot spots detected at PWQ. Design-related systematic defects, however, are found repeatedly,
and if they can be classified into groups, a lot of analysis time can be saved.
We have demonstrated an EDA tool which can handle the large amount of output data from DBM by classifying
pattern defects into groups. It can classify millions of patterns into less than a thousand pattern groups. It has been
evaluated on the PWQ analysis of a metal layer in a NAND Flash memory device and of random contact hole patterns
in a DRAM device. Verification was also tuned to the specific needs of designers as well as defect analysis
engineers by using the EDA tool's 'Pattern Matching Function'. The verification result was well within the required
specifications of both the designer and the analysis engineer. The procedures of hot spot management through
Design Based Metrology are presented in detail.

Recently, several Design Based Metrologies (DBMs) have been introduced and are in use for wafer verification. The
major applications of DBM are OPC accuracy improvement, DFM feedback through Process Window
Qualification (PWQ) and advanced process control. In general, however, the amount of output data from DBM is
so large that it is very hard to handle the data for valuable feedback. In the case of PWQ, thousands
of hot spots are detected on a single chip at the edge of the process window, so it takes much time and labor to review
and analyze all the hot spots detected at PWQ. Design-related systematic defects, however, are found repeatedly,
and if they can be classified into groups, a lot of analysis time can be saved.
We have demonstrated an EDA tool which can handle the large amount of output data from DBM by reducing
pattern defects to groups. It can classify millions of patterns into less than a thousand pattern groups. It has been
evaluated on the PWQ analysis of a metal layer in a NAND Flash memory device and of random contact hole patterns
in a DRAM device.
The results show that this EDA tool can handle the CD measurement data easily and can save a lot of time and
labor in the analysis. The procedures of systematic defect filtering and data handling using an EDA tool are
presented in detail.
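The grouping step these abstracts rely on can be sketched as a signature hash: each detected hot spot is reduced to a translation-invariant, grid-snapped description of its local layout, and hot spots sharing a signature fall into one group. The signature used here (sorted edge offsets relative to the defect location) is an invented stand-in for whatever the actual EDA tool computes.

```python
# Hedged sketch of defect-pattern grouping: millions of hot spots collapse to
# a small number of groups when keyed by a normalized local-layout signature.
from collections import defaultdict

def signature(defect_xy, nearby_edges, grid=5):
    """Translation-invariant, grid-snapped signature of the local layout."""
    x0, y0 = defect_xy
    offsets = sorted(
        ((ex - x0) // grid, (ey - y0) // grid) for ex, ey in nearby_edges
    )
    return tuple(offsets)

def group_defects(defects):
    """Map each unique signature to the list of defect locations sharing it."""
    groups = defaultdict(list)
    for xy, edges in defects:
        groups[signature(xy, edges)].append(xy)
    return groups

# Two defects in identical local layouts, one in a different layout.
d1 = ((100, 100), [(110, 100), (100, 120)])
d2 = ((500, 300), [(510, 300), (500, 320)])
d3 = ((900, 900), [(930, 900)])
groups = group_defects([d1, d2, d3])
print(len(groups))  # 2
```

Reviewing one representative per group, instead of every detected site, is where the claimed time savings come from.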

In the field of lithography, EUV lithography is a leading candidate for the sub-30 nm technology node.
The EUVL exposure system has different characteristics from the DUV exposure system: the EUV source wavelength is short
and no material is transparent to it, so an off-axis reflective optical system is used for patterning in place of the on-axis
refractive system of DUV, and a different reticle design is needed, consisting of 40 pairs of Mo/Si multilayers
and an absorber layer in place of the conventional mask. Because of the oblique incidence on the mask, shadowing effects
occur, such as pattern asymmetry, pattern shift and pattern bias depending on pattern orientation. Owing to the non-telecentric
characteristics of the EUV scanner, the shadowing effect produces CD variation across the field position[1][2]. Besides, it is well
known that an EUV scanner has higher flare than a conventional DUV scanner. Therefore, the correction of the mask shadowing
effect and flare level is one of the important issues for EUV lithography.
In this paper, the process window and MEF of EUV lithography have been examined by 3D mask simulation. CD
variation by shadowing is simulated for various pattern orientations. A shadowing correction has been
calculated as a function of field position to reduce the shadowing effect, and the correction effect is examined by simulation and
experimental results. The principle of radial overlay shift across the field is verified, and then the shift length of line-and-space
patterns is calculated.

Hyper-NA systems have been introduced to develop sub-60 nm node memory devices. Memory
industries, including the DRAM and NAND Flash businesses, have driven ever finer technology to improve
productivity. Polarization at hyper NA is well known as an important optical technology for enhancing
imaging performance and achieving very low-k1 processes. Source polarization on dense structures has
been used as one of the major RET techniques. The process capabilities of various layers under specific
illumination and polarization have been explored.
In this study, the polarization characteristics of a 40 nm memory device will be analyzed. In particular, TE
(Transverse Electric) polarization and linear X-Y polarization on a hyper-NA ArF system will be compared and
investigated. First, the IPS (Intensity in Preferred State) value will be measured with a PMM (Polarization
Metrology Module) to confirm the polarization characteristics of each machine before simulation. Next, simulation
will be done to estimate the CD variation impact of each polarization under different illuminations. Third, various
line-and-space patterns of DRAM and Flash devices will be analyzed under different polarization conditions to see
the effect of polarization on the CD of the actual wafer. Finally, conclusions will be drawn from this experiment and
future work will be discussed.
In this paper, the behavior of 40 nm node memory devices with the two types of polarization is presented and
guidelines for polarization control are discussed based on the patterning performance.

The downscaling of the feature sizes and pitches of semiconductor devices requires a sufficient process window and good CDU across the exposure field for improved device characteristics and high yield. Recently, several DBMs (Design Based Metrologies) have been introduced for wafer verification and feedback to DFM and process control. The major applications of DBM are OPC feedback, process window qualification and advanced process control feedback. Since the tool applied in this procedure uses an e-beam scan method against the design layout database, as other DBMs do, more precise and rapid verification can be done.
In this work, the process window qualification procedure will be discussed in connection with EDA simulation results, and a method for obtaining good CDU will then be introduced. The DoseMapper<sup>TM</sup> application has been introduced for better field CDU control, but it is difficult to fully correct a large field with the limited data from normal CD SEM methodology. The new DBM is strong at collecting the large amount of data required for large-field correction with good repeatability (intra-/inter-field).
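The dense CD data feeding a field-CDU correction can be sketched as follows: given per-location CD errors (the kind of data a DBM can supply at scale) and an assumed linear dose-to-CD sensitivity, compute the per-location dose offset that would null each error. The sensitivity value, location names, and numbers are illustrative assumptions, not DoseMapper internals.

```python
# Hedged sketch of DoseMapper-style field CDU correction: per-location dose
# offsets are the CD errors divided by an assumed linear dose sensitivity.

DOSE_SENSITIVITY = -2.0   # nm CD change per mJ/cm^2 dose increase (assumed)

def dose_offsets(cd_errors_nm):
    """Per-field-location dose offsets (mJ/cm^2) that null the CD errors."""
    return {loc: -err / DOSE_SENSITIVITY for loc, err in cd_errors_nm.items()}

# CD measured 1 nm too large at the field edge, 0.5 nm too small at center.
errors = {"edge": +1.0, "center": -0.5}
corr = dose_offsets(errors)
print(corr["edge"])    # 0.5  -> add dose to shrink the oversized feature
print(corr["center"])  # -0.25
```

The point of the DBM here is only the density and repeatability of the error map; the correction arithmetic itself is simple once the map exists.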

Recently, several DBMs (Design Based Metrologies) have been introduced for wafer verification and feedback to DFM.
The major applications of DBM are OPC accuracy feedback, process window qualification and advanced process
control feedback. In general, however, DBM produces a huge amount of measurement data, and a special server system is
necessary for uploading and handling the raw data. Since it also takes much time and labor
to analyze the raw data for valuable feedback, it is desirable to connect to EDA tools such as OPC tools or
MBV (Model Based Verification) tools for data analysis. If they can communicate in a common language,
the DBM measurement results can be sent back to the OPC or MBV tools for better model calibration. For
advanced process control of wafer CDU, DBM measurements of field CDU can be fed back to the scanner for
illumination uniformity correction.
In this work, we discuss integration of DBM with other tools such as EDA tools. These integrations are
targeted at automating the verification procedure and, as a result, at faster and more exact analysis of measurement
data. The procedures of tool integration and automatic data conversion between them will be presented in detail.

Model-based OPC has generally been used to correct proximity effects down to ~50 nm critical dimensions at
k1 values around 0.3. As design rules shrink and k1 drops below 0.3, however, it is very hard to obtain a sufficient process
window and acceptable MEEF (Mask Error Enhancement Factor) with conventional model-based OPC. Recently, ILT
(Inverse Lithography Technology) has been introduced and has demonstrated wider process windows than conventional
OPC. The ILT developed by Luminescent uses level-set methods to find the optimal photomask layout, which
maximizes the process window subject to mask manufacturing constraints.
We have evaluated the performance of ILT for critical dimensions of 55 nm, printed under conditions
corresponding to k1 ~ 0.28. The results indicated a larger process window and better pattern fidelity than obtained with
other methods. In this paper, we present the optimization procedures, model calibration and evaluation results for 55 nm
metal and contact layers and discuss the possibilities and limitations of this new technology.

The k1 factor for development and mass production of memory devices has decreased to below 0.30 in
recent years. Process technology has responded with extreme resolution enhancement technologies (RET) and much
more complex OPC than before. ArF immersion lithography is expected to remain the major patterning
technology through the sub-35 nm node, where the degree of process difficulty and the sensitivity to process
variations grow even higher. So Design for Manufacturing (DFM) is proposed to lower the degree of process
difficulty, and advanced process control (APC) is required to reduce process variations. However, both DFM
and APC need extensive feedback from the wafer side, such as hot spot inspection results and total CDU measurements
at the lot, wafer, field and die level.
In this work, we discuss a new design-based metrology which can compare SEM images with CAD data and measure
all CD deviations from the original layout in a full die. It can provide full information on hot spots and
the CD distribution of various transistors in the peripheral regions as well as the cell layout. So it is possible
to analyze the root cause of the CD distribution of specific transistors or cell layouts, such as OPC error, mask
CDU, lens aberrations or etch process variation. The applications of this new inspection tool will be
introduced and APC using the analysis results will be presented in detail.

In a hyper-NA system, specific illumination combined with polarization can be used as one of the major RET techniques.
Polarization in a high-NA dry system is also regarded as an important technology for improving very low-k1
processes. The benefits of polarization on repeated structures are very well known. However, we also need to understand
the effect on random patterns in the peripheral region to successfully adopt polarization technology in real devices.
Memory devices such as DRAM and NAND Flash have repeated cell structures as well as loose patterns in the peripheral region.
In this study, two kinds of polarization function will be applied to real memory devices, and the polarization behavior on
various patterns in the peripheral circuit will be analyzed through the actual printing process using a 6% attenuated PSM on an ArF
high-NA dry system. The printed results will be compared on random patterns using an in-line metrology tool, and a process
guideline including OPC treatment will be discussed based on this study, especially with regard to ID bias.

The downscaling of the feature sizes and pitches of semiconductor devices continuously requires improved device
characteristics and high yield. In the lithography process, RET techniques such as immersion and polarization,
including strong PSM masks, have enabled this improvement in printability and downscaling of devices. It is true that
optical lithography is approaching its limit, so another lithographic technique such as EUV is needed, but it is
not yet available. From this point of view, a lithography-friendly layout enables good printability and a stable
process, and its scope is being enlarged and applied in most semiconductor devices. Therefore, in order to realize a
precise and effective lithography-friendly layout, we need full-chip data feedback on design issues, OPC error,
aberration and process variables.
In this paper, we report the results of data feedback using a new DFM verification tool. This tool enables full-chip
inspection through an e-beam scan method with fast and accurate output, and the data can be classified by item
for correction and stability checks through die-to-database inspection. Especially in the gate process, total CD distributions in
the full chip can be displayed and analyzed for each target with a simple method. First, we obtain accuracy data for each
target and CD uniformity from hundreds of thousands of gate patterns. Second, we detect delicate OPC errors from
modeling accuracy and duty differences, which is difficult to do from measurements of only thousands of patterns. Finally, we
investigated specific patterns and areas for electrical characteristic analysis in the full chip. These results should be
considered and reflected at the design stage.

As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect
device performance, so the importance of optical proximity correction (OPC) and resolution enhancement
technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device
performance. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and
CD uniformity, which represent the real wafer.
Recently, design-based metrology systems have become capable of detecting differences between the database and the wafer SEM image,
and are able to extract whole-chip CD variation information. From these results,
OPC abnormalities were identified and design feedback items were also disclosed. Other approaches, such as model-based
OPC verification, have been developed by EDA companies. Model-based verification is performed for the full-chip area using a
well-calibrated model; its objective is the prediction of potential weak points on the wafer and fast
feedback to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an
appropriate combination of design-based metrology and model-based verification tools is very important.
Therefore, we evaluated a design-based metrology system and a matched model-based verification system for the optimum
combination of the two. In our study, a huge amount of wafer result data is classified and analyzed by
statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is
proposed using the combination of design-based metrology and model-based verification tools.

The minimum feature size of new-generation memory devices is approaching the 50 nm era, and very precise CD control is demanded not only for cell layouts but also for the core and peripheral layouts of DRAM devices. However, as the NA of the lens system grows higher and Resolution Enhancement Techniques (RETs) become more aggressive, isolated-dense bias increases and the process window for the core and peripheral layouts decreases dramatically. So the burden on OPC increases in proportion, and it is necessary to verify as many features as possible on the wafer; if possible, it would be desirable to verify all the features in a die. Recently, a novel inspection tool has been developed which can verify all kinds of patterns on the wafer based on a die-to-database comparison method. It can identify all serious systematic defects with nanometer-order size error from the original layout target and feed the systematic error points back to OPC for more accurate model tuning. In addition, we can obtain the full-field CD distribution diagram of specific transistors from hundreds of thousands of measurement data points, so we can analyze the root cause of the CD distribution in a field, such as mask CDU or lens aberrations. We can also perform Process Window Qualification on all the features in a die. In this paper, the OPC verification methodology using the new inspection tool will be introduced, and its application to the analysis of full-field CD distribution and Process Window Qualification will be presented in detail.

In terms of mass production, CD variation between exposure tools is unavoidable because of differing tool characteristics. The major CD variation comes from different optical proximity effect (OPE) responses between exposure tools. By knowing and controlling the major contributors to the OPE, device ramp-up will be faster because one reticle can be used on various exposure tools. Therefore, quantitative measurement and simulation with actual exposure tool characteristics are needed to analyze the proximity impact on CD. For this purpose, CD data were collected on wafers and analyzed to find exposure tools with large ID bias. Normal and abnormal exposure tools in terms of proximity matching were inspected using the LITEL products ISI<sup>TM</sup> (In-situ Interferometer) and SMI<sup>TM</sup> (Source Metrology Interferometer). ISI<sup>TM</sup> and SMI<sup>TM</sup> were used for collecting machine characteristics, and Solid-E<sup>TM</sup> was used for simulation. From this study, a practical procedure is proposed to keep exposure tools with large proximity mismatch out of the production line, and the impact of actual tool characteristics on proximity matching is established.
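The screening step described above can be sketched simply: compute each tool's iso-dense (ID) bias from measured CDs and flag tools whose bias deviates from the fleet median by more than a tolerance. The tool names, CD values, and tolerance are invented for illustration; a real procedure would use many feature pairs, not one.

```python
# Illustrative sketch of exposure-tool proximity screening via ID bias.
from statistics import median

def id_bias(cd_iso_nm, cd_dense_nm):
    """Iso-dense bias: CD of the isolated feature minus CD of the dense one."""
    return cd_iso_nm - cd_dense_nm

def flag_outlier_tools(measurements, tol_nm=1.0):
    """Return tools whose ID bias is more than tol_nm from the fleet median."""
    biases = {t: id_bias(iso, dense) for t, (iso, dense) in measurements.items()}
    med = median(biases.values())
    return sorted(t for t, b in biases.items() if abs(b - med) > tol_nm)

meas = {
    "tool_A": (62.0, 60.0),   # ID bias 2.0
    "tool_B": (62.2, 60.1),   # ID bias 2.1
    "tool_C": (64.5, 60.0),   # ID bias 4.5 -> outlier
}
print(flag_outlier_tools(meas))  # ['tool_C']
```

A flagged tool would then be characterized with the interferometric tools named above to find which lens or source parameter drives its abnormal OPE response.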

Polarization is becoming a very important technology in microlithography at higher NA for much smaller designs. Wide and intensive studies on applying polarization technology to lithography have been carried out. Source polarization, mask polarization and projection lens polarization can produce different printing results compared to unpolarized cases. In particular, k1 factors below 0.3 need aggressive resolution enhancement techniques, while environmental parameters such as mask CD, lens aberration, stray light, image plane deviation and resist characteristics degrade CD controllability in the very low-k1 regime. Polarization technology can contribute to better imaging performance. This experiment pushes the k1 factor down to 0.29 with the source polarization function. The source polarization effect on a real device will be shown through simulation and the actual printing process using a 6% attenuated PSM. The related OPC strategy with the polarized source will also be discussed.

As the design rule of devices shrinks, it is difficult to enlarge the process window, especially the DOF (Depth of Focus). Resolution has been addressed with short wavelengths, high-NA apertures and several RETs (Resolution Enhancement Techniques) such as special illuminators and mask techniques. But the DOF process window remains a challenge for contact/via processes with various pitches and pattern locations, and is a key point in sub-100 nm process development and production. It is demonstrated that the focus scan method is effective for DOF improvement in contact and via layers. Focus scan is one of the focus drilling techniques: the wafer stage is tilted so that the same point in the wafer field is exposed over a limited, continuous focus range using multiple focal planes through the slit of the scanner. In this study, focus scan effects were confirmed through simulation and wafer evaluation from a process feasibility viewpoint. DOF increased by over 50% with focus scan in the contact mask process, although several issues remain to be solved and considered. Energy latitude (EL) decreased a little due to the image contrast drop, but considering the process window required for the device, it is sufficient for the process. OPC or bias tuning is needed for application to contact layers with various pitches and locations, and overlay must be confirmed for each illuminator. From these experiments, it is found that the DOF margin can easily be enhanced using the focus scan method, with some fine tuning required to use it adequately on production devices.

As the minimum feature size shrinks, i.e., in the low-k1 lithography regime, the tool's lens aberration sensitivity and imperfections in the user-defined illumination can play a major role in patterning error. Thus, studying the impact of lens aberration and illumination on patterning is required for good tool maintenance and yield improvement. For this purpose, we collected many cases of abnormal patterning results from the production line and simulated them using actual lens aberration and illumination source data. The LITEL ISI (In-situ Interferometer) and SMI (Source Metrology Interferometer) products were used to characterize the lens and the illumination source, and the LITEL ACE (Analysis and Characteristic Engine) was used as the simulator.
In this work, deformation of pattern fidelity, for example CD asymmetry in the word line and metal contact layers, pattern bending in the isolation layer, and a reduced process window in the bit line layer, is discussed with experimental and simulation data. Finally, we establish a guideline for preventing these abnormal phenomena. From this study, we can understand which lens aberration terms and illumination imperfections cause abnormal patterning results.
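The mechanism linking an odd aberration term to left-right asymmetry can be sketched numerically. The following is an illustrative pupil/PSF calculation (plain FFT optics, not the LITEL ACE simulator); the grid size and the coma amplitude are arbitrary:

```python
import numpy as np

# Illustrative sketch: adding x-coma (fringe Zernike Z7) to the pupil
# phase skews the point-spread function, the mechanism behind the
# left-right CD asymmetry discussed above.

N = 255                                # odd grid so the PSF center is one pixel
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)
theta = np.arctan2(Y, X)
aperture = (rho <= 1.0).astype(float)

def psf(coma_waves):
    # Fringe Zernike Z7 (x-coma): (3*rho**3 - 2*rho) * cos(theta).
    phase = 2.0 * np.pi * coma_waves * (3.0 * rho**3 - 2.0 * rho) * np.cos(theta)
    pupil = aperture * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

def lr_asymmetry(img):
    # Left-right intensity imbalance about the central column.
    c = N // 2
    left, right = img[:, :c].sum(), img[:, c + 1:].sum()
    return abs(left - right) / (left + right)

print(lr_asymmetry(psf(0.0)), lr_asymmetry(psf(0.1)))  # coma skews the PSF
```

With zero aberration the imbalance is numerically zero; with 0.1 waves of coma the PSF develops the familiar one-sided tail, which prints as CD asymmetry.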

New generation DRAM devices, such as high speed graphics DRAMs, demand smaller transistors and very precise CD control. However, the application of very high NA and aggressive Resolution Enhancement Techniques (RETs) increases isolated-dense bias and leaves a very small process window for isolated transistor patterns. This implies that very aggressive and also very delicate OPC work is required for these new generation devices.
A novel measurement system that can compare CD SEM images with CAD data has been developed, and by connecting it with OPC tools we were able to systematically calibrate OPC models and verify modeling accuracy. In this paper, the functions of the novel measurement system are presented, along with its application to OPC calibration and OPC accuracy verification. The system was very useful for 2D model calibration, and we were able to enhance OPC accuracy through this systematic OPC calibration and verification methodology.
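The calibration loop implied above (simulate, compare against SEM-measured CDs, minimize the error) can be sketched with a deliberately simple one-parameter model; the gauge data and the bias-only "model" below are hypothetical stand-ins for a real OPC model:

```python
import numpy as np

# Hedged sketch of model calibration: fit a model parameter so that
# simulated CDs match measured CDs, scoring the fit by RMS error.

# Hypothetical gauges: drawn CD on mask (nm) and SEM-measured wafer CD (nm).
drawn = np.array([100.0, 120.0, 150.0, 200.0, 300.0])
measured = np.array([92.0, 113.0, 144.0, 196.0, 298.0])

def simulate(drawn_cd, bias):
    # Stand-in for an OPC model: wafer CD = drawn CD + a global bias term.
    return drawn_cd + bias

def rms_error(bias):
    return float(np.sqrt(np.mean((simulate(drawn, bias) - measured) ** 2)))

# Calibrate by scanning the bias and keeping the best-scoring value.
candidates = np.linspace(-15.0, 5.0, 201)
best = min(candidates, key=rms_error)
print(round(float(best), 1), round(rms_error(best), 3))
```

A production flow replaces the bias term with a full resist/optical model and the five gauges with thousands of SEM-to-CAD measurements, but the verification metric, RMS CD error, is the same.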

As optical lithography is pushed toward its theoretical resolution limit, very high NA and aggressive Resolution Enhancement Techniques (RETs) are required to ensure the necessary resolution and sufficient process window for DRAM cell layouts. The introduction of these technologies, however, leaves a very small process window for core and peripheral layouts. In addition, new generation DRAM devices demand very precise CD control of the core and peripheral layouts. This implies that the time has come to keep a watchful eye on the core and peripheral layouts as well as the DRAM cells. Recently, Process Window Qualification (PWQ) technology has been introduced and is known to be very useful for estimating the process window of core and peripheral layouts. Also, a novel measurement system that can compare SEM images with CAD data is being developed; it can be of great help in evaluating OPC accuracy and feeding CD deviations back to OPC modeling. Last but not least, New Mask Qualification (NMQ) is proposed to verify very low k1 lithography by comparison with relatively high k1 lithography. In this paper, the most effective OPC verification methodologies for the sub-100nm node are discussed.

As optical lithography pushes toward its theoretical resolution limit of k1 = 0.25, the application of aggressive Resolution Enhancement Techniques (RETs) is required in order to ensure the necessary resolution, sufficient process window, and reasonable MEEF in critical layers. When chip makers adopt RETs in low-k1 devices, there are many crucial factors to take into account in development and mass production. These hurdles are not only difficult to overcome but also highly risky for a company adopting a low-k1 mass production strategy. However, such a strategy is very attractive to all chip makers, owing to improved production capacity and cost of ownership, so low-k1 technology has been investigated by many lithography engineers and many approaches have been introduced, though most remain at the R&D level. In this study, low-k1 mass production issues are introduced, and a definition of low k1 in mass production is suggested. Many of the issues presented were investigated and experienced both in the R&D development stage and on the final mass production line; low-k1 mass production is somewhat different from R&D development alone.

In this work, CD control issues in 0.37 k1 optical lithography are discussed in terms of lens aberration sensitivity. The specific aberration terms that cause CD asymmetry in the isolation, word line, and storage node layers were investigated by simulation and CD uniformity measurement. The lens aberration was characterized with the LITEL ISI (In-Situ Interferometer), and the aberration sensitivity was investigated with Solid-C aerial image simulation. From these results, we can understand the relation between significant Zernike terms and CD control of the DRAM's critical layers.

One of the crucial factors to take into account in the development and production of the 130 nm node in a low-k1 DRAM process is controlling the lens aberration sensitivity of optical lithography tools. To meet the required specifications, the impact of lens aberration, which reduces the process window through pattern deformation, CD uniformity degradation, CD asymmetry, pattern shift, and so on, should be understood and considered. In this study, we mainly focused on aberration sensitivity control for the DRAM isolation layer, which is very sensitive to odd components such as coma and trefoil. There are a few methods for this; we examined the application of an extreme sigma setting, a powerful means of improving asymmetric patterns, together with layout rotation. It was confirmed that the simulated images and real patterning results for the left-right CD difference arising from the aberrated lens are well matched. In addition, why the extreme sigma setting is more effective than standard settings was investigated by analyzing the diffraction patterns on the pupil filling of the projection lens optics combined with Zernike coefficient phase maps.
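The pupil-filling argument can be illustrated with a simple geometric check (assumed numbers, and a scalar treatment of a single off-axis source point rather than a full source integration): for a pitch p, the first diffraction orders sit at ±λ/(p·NA) in pupil coordinates, displaced by the source position σ, and an extreme sigma setting can capture an order that a standard setting misses.

```python
# Toy sketch: does the +1st diffraction order of a grating of pitch p
# land inside the unit pupil for a given illumination sigma? The pitch,
# NA, and sigma values below are illustrative, not from the paper.

def first_order_captured(pitch_nm, wavelength_nm=248.0, na=0.70, sigma=0.85):
    # Order separation in pupil coordinates; an off-axis source point at
    # radius `sigma` shifts the orders, and the +1st order is captured
    # if its shifted position stays within the unit pupil.
    shift = wavelength_nm / (pitch_nm * na)
    return abs(sigma - shift) <= 1.0

# A 220 nm pitch under 0.70NA KrF: standard vs extreme sigma.
for s in (0.5, 0.85):
    print(s, first_order_captured(220.0, sigma=s))
```

When both the zeroth and first orders are captured, two-beam interference forms the image; this is the mechanism behind the extreme-sigma benefit analyzed in the abstract.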

Leading chip makers are now trying to develop the 130 nm technology node using 0.70NA KrF lithography, whose k1 factor is 0.37. It is, however, accepted that applying a low-k1 process below 0.40 to mass production is a real challenge, so it would be desirable to produce with a higher k1 factor using, for example, 0.80NA KrF or 0.75NA ArF lithography. But since these advanced tools are not yet available, chip makers who wish to produce 130 nm technology node devices earlier must choose a low-k1 process with 0.70NA KrF lithography. In mass production, throughput and production yield are the most significant parameters defining productivity, and both should be considered carefully when determining field size. It is possible to place several chips in a large field for better throughput; however, this can degrade CD uniformity and thereby reduce production yield, especially in a low-k1 process whose process window is not wide enough. Conversely, a small field may contribute to higher production yield, but at the expense of throughput. In this study, a model procedure to determine the optimum field size by simulating relative product yield and throughput is introduced for 130 nm technology node mass production with a low-k1 process.
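The yield-versus-throughput trade-off described above can be sketched as a small optimization; all coefficients below (scan rate, stepping overhead, yield penalty) are made-up placeholders, not data from the study:

```python
import math

# Hedged sketch: a larger field means fewer exposures per wafer (higher
# throughput) but worse CD uniformity, modeled as a linear yield penalty.
# We score candidate field areas by throughput times relative yield.

def throughput_wph(field_area_mm2, overhead_s=0.5, scan_rate_mm2_s=400.0):
    # Wafers per hour: per-field exposure time plus stepping overhead.
    wafer_area = math.pi * 150.0 ** 2        # 300 mm wafer, edge loss ignored
    n_fields = wafer_area / field_area_mm2
    return 3600.0 / (n_fields * (field_area_mm2 / scan_rate_mm2_s + overhead_s))

def relative_yield(field_area_mm2):
    # Hypothetical CDU-driven yield penalty growing with field area.
    return max(0.0, 1.0 - 0.0005 * field_area_mm2)

def productivity(field_area_mm2):
    return throughput_wph(field_area_mm2) * relative_yield(field_area_mm2)

candidates = range(200, 875, 25)             # candidate field areas (mm^2)
best = max(candidates, key=productivity)
print(best, round(productivity(best), 2))    # interior optimum field size
```

Because throughput rises and yield falls with field area, the product has an interior maximum; the study's model procedure performs the same search with calibrated yield and throughput models.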

As the design rule shrinks, the CMP process becomes important for obtaining an adequate depth-of-focus margin in optical lithography. However, alignment marks deformed by the CMP process contribute to the total overlay error budget. This study examines the effect on alignment accuracy of various CMP polishing targets in the STI process and optimizes them for stable overlay control in gate patterning. First, polishing uniformity was monitored against polishing targets in the STI process; the results show that uniformity worsens as the polishing target increases. The signal contrast of the alignment mark also becomes lower, and thus the modeled alignment residual increases. Investigations of the modeled alignment residual, signal profile, and overlay results for each alignment mark type show that a convex alignment mark is more sensitive than a concave one to variations in the polishing target. The effect of the tungsten gate film stack was also considered. In order to create alignment topology, the oxide between alignment marks was removed by wet etching, and the gate film stack materials (poly, tungsten, nitride, oxide) were successively deposited over the alignment mark. For stable overlay control, the alignment mark type and the segment width of the concave alignment mark were optimized using the modeled alignment residual together with signal profiles and overlay results. From this study, it is found that a narrow-width segmented alignment mark of concave type is better than a normal bar alignment mark of convex type for overlay control in tungsten gate patterning.
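The "modeled alignment residual" used above can be sketched as a linear overlay-model fit: fit translation, scale, and rotation to the measured mark offsets by least squares, and whatever the model cannot explain is the residual. The mark coordinates and offsets below are invented for illustration:

```python
import numpy as np

# Hedged sketch of a linear overlay model. Positions are in mm from the
# wafer center, offsets in nm, so scale and rotation come out in nm/mm.

xy = np.array([[-100.0, 0.0], [100.0, 0.0], [0.0, -100.0], [0.0, 100.0]])
dx = np.array([-12.0, 8.0, -2.0, -2.0])     # measured x offsets (nm)
dy = np.array([5.0, 5.0, -5.0, 14.0])       # measured y offsets (nm)

# Model: dx = Tx + s*x - r*y ; dy = Ty + s*y + r*x
# Unknowns: [Tx, Ty, s (scale), r (rotation)].
n = len(xy)
A = np.zeros((2 * n, 4))
A[:n, 0] = 1.0;  A[:n, 2] = xy[:, 0];  A[:n, 3] = -xy[:, 1]
A[n:, 1] = 1.0;  A[n:, 2] = xy[:, 1];  A[n:, 3] = xy[:, 0]
b = np.concatenate([dx, dy])

params, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ params
print(params)                              # fitted [Tx, Ty, scale, rotation]
print(float(np.sqrt(np.mean(residual**2))))  # RMS modeled alignment residual
```

A mark degraded by CMP gives noisier measured offsets, which shows up directly as a larger RMS residual, the metric the study uses to compare mark types.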

There are a number of process issues to take into account when patterning sub-0.20 micrometer contact holes with optical lithography for logic and/or DRAM devices. Some of the most critical factors for patterning sub-0.20 micrometer contact holes are discussed.
