A two-dimensional cellular automata model for atomic layer etching (ALE) is presented and used to predict the etch rate and the evolution of the roughness of various surfaces as a function of the efficiencies, or probabilities, of the adsorption and removal steps in the ALE process. The atoms of the material to be etched are initially placed in a two-dimensional array several layers thick. The etch follows the two-step ALE process. First, the initial reaction step (e.g., Cl reacting with Si) is assumed to occur at 100% efficiency, activating the exposed surface atoms; that is, all exposed atoms react with the etching gas. The second reaction step (e.g., Ar ion bombardment or sputtering) occurs with efficiencies that are assumed to vary depending on the exposure of the surface atoms relative to their neighbors and on the strength of bombardment. For sufficiently strong bombardment or sputtering, atoms below the activated surface atoms can also be removed, which gives etch rates greater than one layer per ALE cycle. The bounds on the efficiencies of the second, removal step are extracted from experimental measurements and fully detailed molecular dynamics simulations from the literature. A trade-off is observed between etch rate and surface roughness as the Ar ion bombardment is increased.
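The two-step cycle described above can be sketched as a minimal lattice update. The probabilities below (`p_remove`, `p_sub`) are illustrative placeholders, not the calibrated efficiencies extracted from the literature, and the removal rule is an assumed simplification of the model's neighbor-dependent efficiencies:

```python
import numpy as np

rng = np.random.default_rng(0)

def ale_cycle(lattice, p_remove=0.9, p_sub=0.1):
    """One adsorption + removal ALE cycle on a 2D lattice (1 = atom, 0 = empty).

    Step 1 (adsorption) is taken as 100% efficient: the topmost atom in
    each column is activated. Step 2 (bombardment) removes each activated
    atom with probability p_remove; with probability p_sub the atom
    directly below is also removed, modeling over-etch at high ion energy.
    """
    n_rows, n_cols = lattice.shape
    for col in range(n_cols):
        filled = np.flatnonzero(lattice[:, col])
        if filled.size == 0:
            continue
        top = filled[0]                      # exposed surface atom (row 0 is top)
        if rng.random() < p_remove:          # removal step
            lattice[top, col] = 0
            if top + 1 < n_rows and rng.random() < p_sub:
                lattice[top + 1, col] = 0    # sub-surface removal (> 1 layer/cycle)
    return lattice

lattice = np.ones((20, 50), dtype=int)
for _ in range(5):
    ale_cycle(lattice)
etch_depth = 20 - lattice.sum(axis=0)        # atoms removed per column
print(etch_depth.mean(), etch_depth.std())   # etch-rate and roughness proxies
```

The mean of `etch_depth` plays the role of the etch rate per cycle and its standard deviation the surface roughness, exposing the rate/roughness trade-off as `p_sub` is raised.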

Predicting the etch and deposition profiles created using plasma processes is challenging due to the complexity of plasma discharges and plasma-surface interactions. Volume-averaged global models allow for efficient prediction of important processing parameters and provide a means to quickly determine the effect of a variety of process inputs on the plasma discharge. However, global models are limited by the simplifying assumptions used to describe the chemical reaction network. Here a database of 128 reactions is compiled and their corresponding rate constants collected from 24 sources for an Ar/CF4 plasma using the platform RODEo (Recipe Optimization for Deposition and Etching). Six different reaction sets, employing anywhere from 12 to all 128 reactions, were tested to evaluate the impact of the reaction database on particle species densities and electron temperature. Because many of the reactions in our database had conflicting rate constants reported in the literature, we also present a method to handle those uncertainties when constructing the model, which includes weighting each reaction rate and filtering outliers. By analyzing the link between a reaction’s rate constant and its impact on the predicted plasma densities and electron temperatures, we determine the conditions under which a reaction is deemed necessary to the plasma model. The results of this study provide a foundation for determining the minimal set of reactions that must be included in the plasma model.
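One plausible form of the weighting-and-filtering step is sketched below. This is an assumed reconciliation scheme, not RODEo's actual one: outliers beyond two standard deviations in log-space are dropped, and the survivors are combined by a weighted geometric mean. The rate-constant values and weights are made up:

```python
import math
import statistics

def reconcile_rate_constants(k_values, weights=None):
    """Combine conflicting literature rate constants for one reaction."""
    logs = [math.log10(k) for k in k_values]
    mu, sigma = statistics.mean(logs), statistics.pstdev(logs)
    kept = [(lk, w) for lk, w in zip(logs, weights or [1.0] * len(logs))
            if sigma == 0 or abs(lk - mu) <= 2 * sigma]   # outlier filter
    wsum = sum(w for _, w in kept)
    # Weighted mean in log-space = weighted geometric mean of the k's
    return 10 ** (sum(lk * w for lk, w in kept) / wsum)

# Three hypothetical reported values for the same reaction (cm^3/s):
k = reconcile_rate_constants([1e-10, 3e-10, 2e-9], weights=[1, 2, 1])
print(k)
```

Working in log-space reflects that rate constants reported for the same reaction often disagree by orders of magnitude, so an arithmetic mean would be dominated by the largest value.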

The design and optimization of highly nonlinear and complex processes like plasma etching is challenging and time-consuming. Significant effort has been devoted to creating plasma profile simulators to facilitate the development of etch recipes. Nevertheless, these simulators are often difficult to use in practice due to the large number of unknown parameters in the plasma discharge and the surface kinetics of the etch material, the dependency of the etch rate on the evolving front profile, and the disparate length scales of the system. Here, we expand on the development of a previously published, data-informed Bayesian approach embodied in the platform RODEo (Recipe Optimization for Deposition and Etching). RODEo is used to predict etch rates and etch profiles over a range of powers, pressures, gas flow rates, and gas mixing ratios for a CF4/Ar gas chemistry. Three examples are shown: (1) etch rate predictions of an unknown material “X” using simulated experiments for a CF4/Ar chemistry, (2) etch rate predictions of SiO2 in a Plasma-Therm 790 RIE reactor for a CF4/Ar chemistry, and (3) profile prediction using level set methods.
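The level-set idea in example (3) can be illustrated with a minimal one-dimensional sketch, which is not RODEo's profile module: the etch front is the zero contour of a function phi, advanced with a first-order upwind scheme at a prescribed etch rate. The grid, rate, and time step are illustrative values:

```python
import numpy as np

def advance_front(phi, rate, dx, dt, steps):
    """Advect the level-set function phi at speed `rate` (> 0, moving +x)."""
    for _ in range(steps):
        # Backward (upwind) difference for a front moving in +x
        dphi = (phi - np.roll(phi, 1)) / dx
        phi = phi - dt * rate * np.maximum(dphi, 0.0)
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.2                         # front initially at x = 0.2
phi = advance_front(phi0, rate=1.0, dx=0.01, dt=0.005, steps=40)
front = x[np.argmin(np.abs(phi))]      # front after time 0.2 at speed 1.0
print(front)
```

In a real profile simulation the scalar `rate` becomes a spatially varying speed derived from the plasma model, which is what couples the etch rate to the evolving front shape.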

Next generation semiconductor technologies like high-density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiments (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high-density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study, we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
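For context on the experiment counts being compared, a full factorial DOE enumerates every combination of factor levels. The factors and levels below are made-up placeholders for a hypothetical O2 plasma recipe, not the ones used in the case studies:

```python
from itertools import product

# Hypothetical three-factor, three-level full factorial design
powers = [100, 300, 500]        # W
pressures = [5, 10, 20]         # mTorr
flows = [10, 30, 50]            # sccm

runs = list(product(powers, pressures, flows))
print(len(runs))                # 3^3 = 27 experiments for the full factorial
```

The experiment count grows as (levels)^(factors), which is why methods that need a fraction of the full factorial budget matter in practice.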

A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single component Metropolis Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and shown to be superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
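The single-component Metropolis-Hastings algorithm named above updates one parameter at a time with an accept/reject test. The sketch below shows the generic algorithm only; the placeholder posterior stands in for the plasma/surface-kinetics likelihood, which is not reproduced here:

```python
import math
import random

random.seed(1)

def log_post(theta):
    # Placeholder posterior: independent unit Gaussians centered at 1.0
    return -0.5 * sum((t - 1.0) ** 2 for t in theta)

def sc_metropolis(theta, n_sweeps=2000, step=0.5):
    """Single-component Metropolis-Hastings with Gaussian proposals."""
    samples = []
    lp = log_post(theta)
    for _ in range(n_sweeps):
        for i in range(len(theta)):           # update one component at a time
            prop = theta.copy()
            prop[i] += random.gauss(0.0, step)
            lp_prop = log_post(prop)
            if math.log(random.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop     # accept the proposed move
        samples.append(theta.copy())
    return samples

chain = sc_metropolis([0.0, 0.0])
mean0 = sum(s[0] for s in chain[500:]) / len(chain[500:])  # posterior mean estimate
print(mean0)
```

Discarding the first 500 sweeps as burn-in, the sample mean of each component converges toward the posterior mean, which is how limited etch-rate data constrain the unknown kinetic parameters.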

Polymer shrinkage from curing in nanoimprint lithography (NIL) strongly affects the ultimate shapes of two- and three-dimensional structures produced following etching. We computationally study the curing step in the NIL process and predict the shape changes caused by polymer shrinkage. The shape changes are predicted for crosses, diamonds with sharp and rounded tips, and multitiered structures that are applicable for multibit memory devices and dual damascene processes. The shape changes from curing are shown to be governed by the shrinkage coefficient of the polymer resist, its Poisson’s ratio, and the geometric aspect ratios of the shapes. Finite element simulations demonstrate that shape change due to polymer densification is equal to the average volumetric contraction of the resist material, but shrinkage is not isotropic and vertical displacement dominates. The thickness of the residual layer does not impact the final profile of the imprinted shapes considered. Further analysis shows that diamonds with sharp tips stay sharp while the tips of rounded diamonds get sharper. Additionally, shape changes for multitiered structures are not uniformly distributed among the tiers. Using etch simulations, we demonstrate the significant impact of polymer shrinkage on the final feature profile.
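The dominance of vertical displacement can be rationalized with a standard linear-elasticity estimate, which is not the paper's finite element model: a resist layer bonded to a rigid substrate cannot contract laterally, so the free linear shrinkage is redirected vertically and amplified by the Poisson effect. The property values below are made up for illustration:

```python
def constrained_vertical_strain(eps0, nu):
    """Vertical contraction of a laterally constrained film.

    eps0: free linear shrinkage (one third of the volumetric shrinkage
          for small, isotropic densification)
    nu:   Poisson's ratio of the resist
    Standard result for a biaxially constrained film with a free surface:
    eps_z = eps0 * (1 + nu) / (1 - nu)
    """
    return eps0 * (1 + nu) / (1 - nu)

eps0 = 0.10 / 3            # assumed 10% volumetric shrinkage -> linear shrinkage
for nu in (0.30, 0.40, 0.45):
    print(nu, constrained_vertical_strain(eps0, nu))
```

For any nu > 0 the vertical strain exceeds the free linear shrinkage, consistent with the observation that shrinkage is not isotropic and vertical displacement dominates.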

Nanosculpting, the fabrication of two- and three-dimensional shapes at the nanoscale, enables applications in
photonics, metamaterials, multi-bit magnetic memory, and bio-nanoparticles. A promising high resolution and high
throughput method for nanosculpting is nanoimprint lithography (NIL). A key requirement for achieving
manufacturing viability of nanosculptures in NIL is maintaining image fidelity through each step of the imprinting
process. In particular, polymer densification during UV curing can distort the imprinted image. Here we study the
shape changes introduced by polymer densification and develop a forward method for predicting changes in
nanoscale geometries from UV curing. We show that shape changes by polymer densification are governed by the
Poisson’s ratio, the shrinkage coefficient of the polymer resist, and the geometric aspect ratios of the nanosculpted
shape. We also show that the size of the residual layer does not impact the final profile of the imprinted shape.

A lattice-type Monte Carlo–based mesoscale model and simulation of the lithography process have been adapted to study the insoluble particle generation that arises from statistically improbable events. These events occur when there is a connected pathway of soluble material that envelops a volume of insoluble material due to fluctuations in the deprotection profile. The simulation shows that development erodes the insoluble material into the developer stream and produces a cavity on the line edge that can be far larger than a single polymer molecule. The insoluble particles can coalesce to form aggregates that deposit on the wafer surface. The effect of the resist formulation, exposure, postexposure bake, and development variables on particle generation was analyzed in both low- and high-frequency domains. It is suggested that different mechanisms are dominant for the formation of line-edge roughness (LER) at different frequencies. The simulations were used to assess the commonly proposed measures to reduce LER such as the use of low molecular weight polymers, addition of quenchers, varying acid diffusion length, etc. The simulation can be used to help set process variables to minimize the extent of particle generation and LER.
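Detecting an insoluble volume enveloped by soluble material is a connected-component problem on the lattice. The sketch below is an assumed form of such a check, not the authors' exact algorithm: insoluble sites that cannot reach the bottom bulk row through other insoluble sites are flagged as a particle that erodes into the developer:

```python
from collections import deque

def find_particles(grid):
    """Return insoluble sites (1) not connected to the bottom bulk row.

    grid: list of rows, 0 = soluble, 1 = insoluble; row 0 is the top.
    """
    n, m = len(grid), len(grid[0])
    anchored = [[False] * m for _ in range(n)]
    q = deque((n - 1, j) for j in range(m) if grid[n - 1][j])
    for i, j in q:
        anchored[i][j] = True
    while q:                               # BFS through insoluble material
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and grid[a][b] and not anchored[a][b]:
                anchored[a][b] = True
                q.append((a, b))
    return [(i, j) for i in range(n) for j in range(m)
            if grid[i][j] and not anchored[i][j]]

grid = [[0, 1, 1, 0],
        [0, 0, 0, 0],
        [1, 1, 1, 1]]        # top blob is enveloped by soluble material
particles = find_particles(grid)
print(particles)
```

Each disconnected component found this way corresponds to one released particle, and its site count gives the particle size that would be tallied against the process variables.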

A lattice-type Monte Carlo based mesoscale model and simulation of the lithography process has been described
previously [1]. The model includes the spin coating, post apply bake, exposure, post exposure bake and development
steps. This simulation has been adapted to study the insoluble particle generation that arises from statistically
improbable events. These events occur when there is a connected pathway of soluble material that envelops a volume of
insoluble material due to fluctuations in the deprotection profile that occur during the post exposure bake [2].
Development erodes the insoluble material into the developer stream as an insoluble particle. This process may produce
a cavity on the line edge that can be far larger than a single polymer molecule. The insoluble particles generated may
coalesce in developer to form large aggregates of insoluble material that ultimately deposit on the wafer surface and the
tooling. The recent modifications made in the mesoscale models for the PEB and dissolution steps, which have enabled
this study, are briefly described. The algorithm used for particle detection in the current study is also discussed. The
effect of the resist formulation and the different lithographic steps, namely, exposure, post exposure bake and
development, on the extent of particle generation is analyzed. These simulations can be used to set process variables to
minimize the extent of particle generation.

The current scale of feature sizes in the microelectronics industry has reached the point where molecular-level
interactions affect process fidelity and produce departures from continuum behavior such as line edge roughness (LER).
Here we present a 3D molecular level model based on the adaptation of the critical ionization (CI) theory using a
fundamental interaction energy approach. The model asserts that it is the favorable interaction between the ionized part
of the polymer and the developer solution which renders the polymer soluble. Dynamic Monte Carlo methods were used
in the current model to study the polymer dissolution phenomenon. The surface ionization was captured by employing an
electric double layer at the interface, and polymer motion was simulated using the Metropolis algorithm. The
approximated interaction parameters, for different species in the system, were obtained experimentally and used to
calibrate the simulated dissolution rate response to polymer molecular weight and developer concentration. The
predicted response is in good agreement with experimental dissolution rate data. The simulation results support the
premise of the CI theory and provide insight into the CI model from a new perspective. This model may provide a
means to study the contribution of development to LER and other related defects based on molecular level interactions
between distinct components in the polymer and the developer.
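The Metropolis step used to simulate polymer motion has a standard form, sketched below with an arbitrary energy difference. The temperature and trial energy are illustrative, not the calibrated interaction parameters of the model:

```python
import math
import random

random.seed(0)
kT = 1.0   # thermal energy scale (illustrative units)

def accept(delta_e):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves with probability exp(-delta_e / kT)."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / kT)

# Empirical acceptance rate for an uphill move of 2 kT over many trials
rate = sum(accept(2.0) for _ in range(10000)) / 10000
print(rate)   # should be close to exp(-2) ~ 0.135
```

In the dissolution context, delta_e would be the change in total nearest-neighbor interaction energy when a polymer segment exchanges places with a developer site, so favorable polymer-developer interactions (the CI premise) directly raise the acceptance rate of dissolution moves.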

In order to quickly and cheaply test candidate fluids and coatings for immersion lithography, we have devised a fluid handling scheme that we call drag-a-drop. We have constructed a prototype tool in order to test materials using this fluid scheme, and conducted several experiments with it. From these tests, we have determined that a hydrophobic topcoat with low contact angle hysteresis on the substrate increases the maximum stable scanning velocity by at least a factor of 2 over a standard 193 nm photoresist. We observed that instabilities on the receding contact line are unaffected by height, but the onset of instability on the advancing contact line occurs when the height of the lens is low. We also examined the drag-a-drop technique for possible use in laser mask writing, and found that by means of a hydrophobic topcoat, the lens can be completely removed from the substrate while keeping the immersion droplet affixed to the lens.

Step and Flash Imprint Lithography (SFIL) is a photolithography process in which the photoresist is dispensed onto the wafer in its liquid monomer form and then imprinted and cured into a desired pattern instead of being exposed through traditional optical systems. The mask used in the SFIL process is a template of the desired features that is made using electron beam writing. Several variable-sized drops of monomer are dispensed onto the wafer for imprinting. The base layer thickness at the end of the imprinting process is typically about 50 nm, with an approximate imprint area of one square inch. This disparity in length scales allows simulation of the fluid movement through the template-wafer channel by solving governing fluid equations that are simplified by lubrication theory. Capillary forces are also an important factor governing fluid movement; a dimensionless number known as the capillary number is used to describe these forces. This paper presents a simulation to model the flow and coalescence of the multiple fluid drops and the effect the number of drops dispensed has on final imprint time. The imprint time is shown to decrease with the use of increasing numbers of drops or with the use of an applied force on the template. Appropriate filling of features in the template is an important issue in SFIL, so a mechanism for handling the interface movement into features using a modified boundary condition is outlined and examples are given. Fluid spreading beyond the mask edge is another issue addressed by the results of this study. The simulation is thus a useful predictive tool, providing insight on the effect multiple drop configurations and applied force have on imprint time, as well as providing a means for predicting feature filling.
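The capillary number mentioned above is the ratio of viscous to capillary (surface tension) forces, Ca = mu * U / gamma. The property values below are assumed for illustration, not taken from the paper:

```python
# Illustrative capillary number for an imprint fluid
mu = 5e-3        # Pa*s, assumed monomer viscosity
U = 1e-4         # m/s, assumed characteristic spreading speed
gamma = 0.03     # N/m, assumed surface tension

Ca = mu * U / gamma
print(Ca)
```

For values in this range Ca is much less than 1, meaning capillary forces dominate the viscous forces and drive the drop spreading and coalescence under the template.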
