How And Where ML Is Being Used In IC Manufacturing

Semiconductor Engineering sat down to discuss the issues and challenges with machine learning in semiconductor manufacturing with Kurt Ronse, director of the advanced lithography program at Imec; Yudong Hao, senior director of marketing at Onto Innovation; Romain Roux, data scientist at Mycronic; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. Part one of this discussion can be found here.

L-R: Yudong Hao, Romain Roux, Aki Fujimura, Kurt Ronse.

SE: What are some of the key applications for machine learning in chip manufacturing?

Fujimura: We use it at D2S for our ILT (inverse lithography technology) product in two ways. At the recent SPIE Advanced Lithography conference, we presented a paper on using deep learning to accelerate the simulation of mask 3D, a complex lithography effect. A rigorous mask 3D simulation would take too long to be useful. A deep learning estimator is fast and enables ILT to incorporate mask 3D effects. The second way was discussed at last year’s BACUS conference, where we talked about using deep learning to generate the initial embedding for the iterative optimization process, which speeds up ILT. We run ILT on a bunch of patterns. Next, we tell the deep learning engine to recognize the transformation from the input target wafer patterns to the output of ILT, which is the required mask shapes. Deep learning is very good at estimating quickly, so we’re able to get a good approximation of the final result quickly and reduce the number of optimization iterations. Our papers and others have shown a 2X improvement in ILT/OPC run times using this technique, while also improving the quality of results.
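The warm-start idea Fujimura describes — use a learned estimate of the final mask as the starting point, then let the iterative optimizer finish the job — can be sketched with a toy one-dimensional stand-in for ILT. The blur "forward model," the gradient loop, and both starting points below are invented for illustration; they are not the D2S implementation:

```python
import numpy as np

# Toy stand-in for ILT's iterative optimization: find a "mask" whose
# simulated wafer image matches the target. The forward model is a
# simple 1-D blur; real ILT uses rigorous lithography simulation.

KERNEL = np.array([0.25, 0.5, 0.25])

def forward(mask):
    return np.convolve(mask, KERNEL, mode="same")

def iterations_to_converge(mask0, target, lr=0.5, tol=1e-2, max_iter=2000):
    """Gradient descent on ||forward(mask) - target||^2; returns the
    iteration count needed to drive the peak residual below tol."""
    mask = mask0.copy()
    for i in range(1, max_iter + 1):
        residual = forward(mask) - target
        if np.max(np.abs(residual)) < tol:
            return i
        # adjoint of the (symmetric) blur is correlation with the kernel
        mask -= lr * np.correlate(residual, KERNEL, mode="same")
    return max_iter

rng = np.random.default_rng(0)
true_mask = rng.random(32)
target = forward(true_mask)

# Naive start (all zeros) vs. a "deep learning" warm start, faked here
# as the true mask plus a small error, standing in for a trained
# network's estimate of the final ILT output.
cold = iterations_to_converge(np.zeros(32), target)
warm = iterations_to_converge(true_mask + 0.05 * rng.random(32), target)
print(cold, warm)  # the warm start should need far fewer iterations
```

In the real flow, the role of the faked warm start is played by a network trained on many (target pattern, finished ILT mask) pairs; the optimizer then only has to correct the network's small errors.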

Ronse: Let’s continue along the lines of OPC. In the early days when OPC was introduced, it was good enough to simulate your aerial image, and then the only thing you had to do was choose your threshold. The aerial image simulator then predicted where the lines would deviate from the targets. But accuracy becomes more important as the specs become tighter, so the aerial image alone is no longer good enough. The resist process also makes a certain contribution, and that adds a number of knobs that you have to optimize to make sure your simulator is mimicking what your process is doing. Then, after resist processing, there is also etch, which can introduce effects that cause the final features to deviate from what you want them to be. So in the end, you have many knobs. If you do everything manually, it could take forever to fit a model that represents your process. That’s where machine learning comes in. We have this model with all the knobs. We start from a randomly chosen set of parameter values, let the simulator run, check it, and it self-corrects some of the knobs. Some of the knobs are not important. Others are crucial and have an enormous impact on the final results. The more complex the process, and especially with EUV, the more knobs you get. That means you have to rely on machine learning to make sure you find the optimum settings for all the knobs in a reasonable time.

SE: So what does this accomplish?

Ronse: The model generation in OPC can be done by machine learning, which speeds it up. If you do it manually, with so many variables that may or may not be related, it can take forever to produce a good model. With machine learning, where all these iterations run automatically and very quickly, you can reach the optimum point with minimum deviation at a much faster rate.
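The automated knob-fitting Ronse describes can be sketched in a few lines. The three-knob "process model" and its knob names below are made up, and a simple accept-if-better random search stands in for whatever calibration algorithm a real OPC tool uses:

```python
import numpy as np

# Hypothetical resist/etch model with three "knobs". The functional
# form and knob names are invented for illustration only.
def toy_model(dose, knobs):
    blur, bias, thr = knobs
    return blur * np.sqrt(dose) + bias - thr * dose

rng = np.random.default_rng(1)
true_knobs = np.array([2.0, 1.5, 0.3])
dose = np.linspace(1.0, 10.0, 50)
measured = toy_model(dose, true_knobs)  # stands in for measured CDs

def fit_error(knobs):
    return np.mean((toy_model(dose, knobs) - measured) ** 2)

# Start from randomly chosen knob values and self-correct: keep any
# random perturbation that reduces the mismatch with the measurements.
knobs = rng.uniform(0.0, 3.0, size=3)
err = fit_error(knobs)
for _ in range(5000):
    candidate = knobs + rng.normal(scale=0.05, size=3)
    candidate_err = fit_error(candidate)
    if candidate_err < err:
        knobs, err = candidate, candidate_err

print(knobs, err)  # fitted knobs and remaining mean-squared error
```

The point of the sketch is the loop structure — simulate, check, self-correct — not the search method; a production calibrator would also report which knobs the fit is actually sensitive to.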

SE: There are some issues that are holding machine learning back from being broadly used in chip manufacturing. My impression is that you need large data sets to get better results. Otherwise, the results may or may not be accurate. However, most equipment vendors can’t afford to develop large data sets. Do you need a large data set here?

Hao: Generally, that is true. Deep learning is based on big data. You cannot solve a lot of unknowns with just a few equations. That’s the basic concept. However, in the metrology world, there are techniques that we can use to deploy deep learning with a small labeled data set.

Roux: I would say that the quality of data is one of the most crucial keys for success. Good data means enough data, well-labeled data, and generated in a context that is as close as possible to the final application. Also, the data should describe all of the typical cases you want your model to handle well. To do that we need to get data directly from real production environments. This is why we need to work tightly with our industrial partners if we want to continue providing high-end equipment to them.

SE: In the future, will machine learning become pervasive in chip manufacturing in general? Or will we continue to use the traditional physics-based model approach? Or are both approaches viable?

Roux: Physics remains the only reliable foundation on which we build simulators. Machine learning-based modules can mimic some behavior, but you need physics to distinguish between correlation and causality. Physical models give you accuracy and deep understanding, while machine learning can provide speed and help you solve some challenging inverse problems. These are two very distinct domains that feed each other.

Fujimura: There’s no question in my mind that machine learning, and in particular the deep learning subset of machine learning, will become increasingly pervasive in the photomask world. Mask makers will continue to use the traditional methods, but they will gradually incorporate new capabilities as they become available in production form. One of the characteristics of deep learning is that a demonstration of promise or feasibility can be created very quickly. However, productizing the capability still takes time. We have already started to see products incorporating deep learning, both in software and in equipment. Any tedious, error-prone process that human operators need to perform, particularly one involving visual inspection, is a great candidate for deep learning. There are many opportunities in inspection and metrology. There are also many opportunities in software to produce more accurate results faster, to help with the turnaround time issue in leading-edge mask shops. And there are many opportunities in correlating big data in mask shops and machine log files with machine learning for predictive maintenance.

Ronse: Potentially, it can be used everywhere. As one example, imagine the complexity of an EUV machine, with the source and the rest of the optics. These machines have lots of sensors that generate data. No single engineer can handle all of that data and make something useful out of it anymore. There, they can use deep learning to try to predict trends. For example, you could use it for preventive maintenance before the tool goes down unexpectedly.

Hao: As to how much we use machine learning, that can vary from one type of application to another. Let’s talk about inspection. Inspection, for example, involves ADC, or automatic defect classification. That’s an ideal use case for machine learning, because it’s based on image analysis and image classification. Usually, a large amount of labeled data is available in inspection. That’s where deep learning can be applied most readily.

SE: Can you give us an example of how metrology works? And how does it apply to machine learning?

Hao: In one example, optical critical-dimension metrology (OCD) is a high-throughput inline metrology technique for process control. For logic devices, you can use it for fin measurements like fin profiles and fin height. There is also a very critical parameter called proximity that determines the device performance. We measure those critical dimensions in the nanometer range at an accuracy in the sub-angstrom level. Besides OCD, metrology also includes CD-SEM, CD-SAXS and others. Let me focus on OCD, which is based on scatterometry. In this world, we usually do not have a very large data set like on the image processing side. The reference data obtained from a TEM is very expensive, so we only have a very small labeled data set. We have a product that uses machine learning for OCD. We are actively exploring deep learning for this application.
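One way the small-label constraint Hao mentions is often worked around can be sketched as follows: a physics-style forward model generates cheap simulated (spectrum, CD) training pairs, and the handful of expensive "TEM-labeled" points serves only as a reference check. The scatterometry response below is an invented toy, and the regressor is plain ridge-regularized linear least squares, not Onto Innovation's product:

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(0.2, 0.8, 20)  # arbitrary units

def simulate_spectrum(cd_nm):
    # invented toy "scatterometry" response plus measurement noise
    return np.exp(-cd_nm * wavelengths / 10.0) + 0.005 * rng.normal(size=20)

# Large simulated training set (cheap to generate) ...
train_cd = rng.uniform(10.0, 20.0, size=500)
X = np.stack([simulate_spectrum(cd) for cd in train_cd])
A = np.hstack([X, np.ones((500, 1))])  # add an intercept column
# ridge-regularized normal equations: (A'A + aI) w = A'y
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(21), A.T @ train_cd)

# ... versus a tiny, expensive labeled reference set
ref_cd = np.array([11.0, 14.0, 17.5])  # pretend TEM reference CDs, in nm
Xr = np.stack([simulate_spectrum(cd) for cd in ref_cd])
pred = np.hstack([Xr, np.ones((3, 1))]) @ w
print(np.max(np.abs(pred - ref_cd)))  # worst-case error across the refs
```

The design choice illustrated here is that the model never needs many labels: the simulator supplies the volume, and the labeled set is only large enough to verify (or bias-correct) the trained regressor.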

SE: Does machine learning solve every problem in metrology?

Hao: First, in metrology, the number one thing is that you need sensitivity. Your tool must be sensitive to the dimensional change that is happening in the process. Without any sensitivity, no machine learning or any other technology will help you. Second, because of low signal sensitivity and the complexity of the devices we’re measuring, classic physics-based modeling technology alone is no longer sufficient. That’s where machine learning comes into play. On the other hand, machine learning by itself may not be the solution, either. Physics is still important. Physical models and machine learning models are both predictive models, and we found that by combining physics and machine learning we get the best performance. Machine learning is complementary to physics. It can help physics, but it is not going to replace physics.
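One common way to combine the two, sketched here with toy models: keep the physics model for the dominant behavior, and train a simple ML model (just a polynomial fit in this sketch) only on the residual the physics model leaves behind. Every function below is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

def true_process(x):        # unknown ground truth being measured
    return 1.2 * x + 0.3 * np.sin(3.0 * x)

def physics_model(x):       # captures only the dominant linear physics
    return 1.2 * x

# noisy "measurements" of the real process
x_train = rng.uniform(0.0, 2.0, 200)
y_train = true_process(x_train) + 0.01 * rng.normal(size=200)

# ML part: fit only the residual that the physics model misses
resid_poly = np.polyfit(x_train, y_train - physics_model(x_train), deg=5)

# compare physics-only vs. hybrid predictions on held-out points
x_test = np.linspace(0.1, 1.9, 50)
physics_err = np.mean((physics_model(x_test) - true_process(x_test)) ** 2)
hybrid_pred = physics_model(x_test) + np.polyval(resid_poly, x_test)
hybrid_err = np.mean((hybrid_pred - true_process(x_test)) ** 2)
print(physics_err, hybrid_err)
```

Because the ML component only models a small correction, it needs far less data than it would to learn the whole response, which is exactly the complementarity Hao describes.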