July 2009

Dr. Martin Timm studied electrical engineering at the Technische Universität Darmstadt. He received his Masters (Dipl.-Ing.) in 1994 at the Computational Electromagnetics Laboratory (Institut Theorie elektromagnetischer Felder - TEMF). In the same year he joined CST - Computer Simulation Technology as a Software Development and Application Engineer. In 1995 he returned to the TU Darmstadt as research assistant at TEMF. Alongside the general application of electromagnetic field simulation, Martin Timm specialised in particle accelerator physics. He received his Doctorate in 2000. He returned to CST as Senior Application Engineer, has held responsibility for the University program, was Regional Manager for India, and is now Director of Marketing.

To comment or ask Dr. Timm a question, use the comment link at the bottom of the entry. The first 5 people to comment will receive a copy of the Electrical Engineering Handbook (please include your e-mail and mailing address).

It is widely accepted that three-dimensional numerical simulation of electromagnetic fields is essential to the successful design of passive components. Obviously, simulating a virtual prototype is much cheaper than building hardware and measuring it, particularly if the design cycle time is considered as well. Looking at modern optimized antenna designs, for example, it is arguable whether such designs would have been possible at all without electromagnetic (EM) field simulation tools, without automatic optimization, and without the possibility to visualize the previously invisible. But saying “all right, let’s go and buy a 3D EM field simulator and everything will be fine” is probably not sufficient. A discussion of the pros and cons of the different methods follows here. For an extensive overview I would recommend textbooks such as [1], [2], or an extended version of this article [3] including all references therein.

Numerical Solution of Maxwell’s Equations

All numerical approaches to solving Maxwell’s equations partition space into sub-domains, where solutions can be found more easily. A mode-matching code, in its simplest application, composes a waveguide system from sections with known behavior by performing a modal expansion and matching the fields at the intersection areas. A Method-of-Moments code synthesizes the far field of an antenna by integrating the Green’s functions of single metallic surface patches. Volume discretization methods work with even more brute force: they subdivide space into small cells and apply Maxwell’s equations on each such entity. To solve the full problem, all single entity solutions are summed up in a usually large system of equations, which needs to be tackled in one way or another. When discussing the properties of the different methods it is necessary to classify them. A major point of difference is the domain they work in, which is either the time domain or the frequency domain. Concentrating on the commercially most relevant methods, we find on the time domain side the Finite Integration Technique (FIT) [4] [5], Finite Difference Time Domain (FDTD) in its explicit [6] [7] or implicit variants [8] [9], and the Transmission Line Matrix method (TLM) [10] [11]. The frequency domain is represented by the Finite Element Method (FEM) [12] [13], FIT, and the Method of Moments (MoM) [14].

Simulations in Time Domain

All time domain methods discussed here – FIT, FDTD, and TLM – feature a Cartesian (or cuboid hexahedral or circular cylindrical coordinate) grid and an explicit time integration scheme. The fields are propagated through the structure by matrix-vector multiplications with a specific time step. The maximum possible time step is determined by the smallest mesh cell in the grid: the larger the time step, the shorter the simulation time. The memory requirements and the simulation time increase linearly with the number of mesh points. Because of these properties, time domain simulators are well suited to solving electrically large and detail-rich structures. Billions of unknowns have been demonstrated in practice (see for example the application in Figure 6).
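The link between the smallest cell and the time step is the Courant–Friedrichs–Lewy (CFL) stability criterion mentioned again later in this article. A minimal sketch (with hypothetical cell sizes, not taken from any example in this article) shows how a single refined cell limits the time step of the entire explicit simulation:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def cfl_time_step(dx: float, dy: float, dz: float) -> float:
    """Maximum stable time step for an explicit update on a Cartesian
    cell of size dx x dy x dz (standard FDTD/FIT CFL bound in vacuum)."""
    return 1.0 / (C0 * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

# One tiny cell in the grid limits the time step for the whole simulation:
coarse = cfl_time_step(1e-3, 1e-3, 1e-3)   # uniform 1 mm cells
fine   = cfl_time_step(1e-3, 1e-3, 1e-5)   # one edge refined to 10 um
print(coarse / fine)  # → about 58: the single refined edge shrinks the global step
```

Since every cell in the grid must advance with the same (smallest) time step, a handful of tiny cells can dominate the total simulation time.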

Steady state signals and 3D electromagnetic fields can be extracted from the transient broadband simulation. Since the excitation signal is broadband, it is possible to obtain fields for various frequencies in one simulation run. A multiband mobile phone, for example, is simulated next to a human head model (Figure 4). Here it is also important to model the frequency dependent behavior of the biological tissues correctly.

Frequency Domain

A characteristic of frequency domain solvers is the implicitness of the approach; the result is typically a large linear system of equations. Thus, a matrix inversion is needed in order to obtain the solution for one frequency, no matter whether the grid is structured or not. In commercial applications, FEM on tetrahedral grids [13] is therefore the most popular general purpose numerical method. Tetrahedrons are the simplest volume entities, and their flexibility in approximating arbitrary geometries entails many benefits. However, tetrahedron quality is crucial: very flat tetrahedrons may compromise solution speed and accuracy, as they make it more difficult for the algebraic solution method to solve the system.

There are two distinct methods of solving the linear systems of equations resulting from an FEM discretization: direct and iterative solvers. A direct solver works directly on the system of equations derived from the discretization. Its key advantage is that it can solve for several port excitations at the same time, in parallel. On the other hand, the memory requirements are quite high; typically they increase quadratically with the number of tetrahedrons. Iterative solvers transform the original system of equations into another one that is solved by the repeated application of operations according to the specific algorithm. The iterative algorithm has to be executed for each excitation individually.
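The direct-versus-iterative trade-off can be sketched in a few lines. This is a toy stand-in, not FEM code: a small symmetric positive definite matrix plays the role of the system matrix, three right-hand sides play the role of three port excitations, and a minimal conjugate gradient loop stands in for the iterative solvers discussed above:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Minimal CG iteration for a symmetric positive definite system;
    a stand-in for the iterative solvers discussed in the text."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)      # SPD stand-in for a system matrix
B = rng.standard_normal((200, 3))    # three "port excitations" as three RHS

# Direct: one factorization handles all excitations at once ...
X_direct = np.linalg.solve(A, B)
# ... whereas the iterative solver must run once per excitation:
X_iter = np.column_stack([conjugate_gradient(A, B[:, k]) for k in range(3)])
print(np.allclose(X_direct, X_iter, atol=1e-6))  # → True
```

The direct solve amortizes its (memory-hungry) factorization over all excitations, while the iterative solver keeps memory low but repeats its work for each right-hand side, exactly the trade-off described above.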

In order to derive broadband results, a sweep of the desired frequency range has to be performed. The number of simulations needed for an accurate broadband result is therefore crucial for the simulation performance.

Mesh Adaptation and Convergence

The accuracy of a simulation result has to be tested by performing a convergence study. In a convergence study the number of mesh cells is continuously increased until the results of interest, usually S-parameters, do not change anymore, at least not significantly. A convergence study is thus an essential part of any simulation project.

Many software tools feature automatic mesh adaptation schemes. Typically fields are evaluated after a simulation run. Wherever strong field variations occur, the mesh is refined and the simulation is restarted. This process is repeated until the results do not change significantly anymore.
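The evaluate-refine-repeat loop described above can be sketched on a toy 1D problem. This is purely illustrative: a trapezoidal integral stands in for the "result of interest", and the local variation of the sampled values stands in for the field-based refinement indicator:

```python
import numpy as np

def adapt(f, a, b, tol=1e-6, max_passes=40):
    """Generic adaptation loop on a 1D toy problem: refine where the
    sampled quantity varies most, stop when the result of interest
    no longer changes significantly."""
    x = np.linspace(a, b, 9)
    previous = None
    for _ in range(max_passes):
        y = f(x)
        # "result of interest": here a simple trapezoidal integral
        result = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)
        if previous is not None and abs(result - previous) < tol:
            break
        previous = result
        jumps = np.abs(np.diff(y))                  # strong variation -> refine
        worst = np.argsort(jumps)[-max(1, len(jumps) // 4):]
        mids = (x[worst] + x[worst + 1]) / 2        # split the worst cells
        x = np.sort(np.concatenate([x, mids]))
    return result, len(x)

value, n_points = adapt(np.exp, 0.0, 1.0)
print(value, n_points)
```

The stopping rule is the same in spirit as for S-parameters: refinement continues until two consecutive passes agree within a tolerance.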

Although a convergence study and mesh adaptation appear to be very similar approaches to guaranteeing accurate results, in practice they are different. For a convergence study we assume that both the geometry approximation of the structure and the result in the entire frequency range of interest are improved continuously with the refined mesh.

Frequency domain solvers typically perform the mesh adaptation only for one frequency, by default usually the highest frequency in the band of interest. The highest frequency is – for example for filters – not necessarily the one that is relevant for the device functionality. A relevant frequency has to be chosen for mesh adaptation, but this information is reliably available only a posteriori. In addition, the field distribution might change significantly with frequency, e.g. for multiplexers or multiband antennas. One single adaptation frequency is not sufficient in such cases: either the simulation has to be split up into several separate frequency bands, or several adaptation frequencies have to be used in one simulation over the entire band.

Figure 1. Final mesh after the adaptation process for a piece of coaxial cable. Left: traditional mesh adaptation. Right: true geometry adaptation leads to a good approximation of the geometry and hence to more accurate results.

In order to derive accurate simulation results, the geometry representation on the grid has to be as good as possible. Tetrahedral grid based frequency domain solvers in particular often do not improve the geometry approximation during mesh adaptation. In the mesh adaptation process the initial tetrahedrons are simply subdivided in order to improve the field sampling (Figure 1, left), whereas mechanisms such as true geometry adaptation also improve the geometry representation (Figure 1, right). In traditional adaptation schemes we therefore see a convergence of results, though not for the input model but for the initially approximated geometry. This effect is even more critical if shapes are segmented before simulation (Figure 2).

Figure 2. Mesh adaptation and convergence. The cylinders of the connector model are segmented before meshing. The small connector pictures show the 6-segment (left) and the 12-segment (right) version. In all cases a mesh adaptation was performed and the S-parameters converged.

In contrast to the frequency domain approaches, the time domain approaches can perform the mesh adaptation broadband. Moreover, every refinement also entails a better geometry approximation, since the entire meshing process is restarted at every mesh adaptation step.

Finally, it should be mentioned that, unlike a tetrahedral mesh, a structured time domain grid can be easily controlled by the user by manipulating mesh lines or meshing densities. Thus the final mesh of an adaptation is nearly reproducible by the user without rerunning the adaptation.

Simulation Performance in Time Domain Methods

In traditional FDTD and TLM methods, every hexahedral mesh cell is filled entirely with one material. This leads to the so-called staircase approximation of the geometry. Obviously such a discretization can make the accurate geometrical representation of many practical devices very difficult, since most components contain rounded features. In order to increase the accuracy in such cases, very fine meshing needs to be applied. Conformal methods, such as the PERFECT BOUNDARY APPROXIMATION (PBA)® [18], can improve the geometry description without compromising the memory efficiency of standard FDTD [7]. The performance increase through such a method is remarkable, as we can also see in our connector example (Figure 3): not only is a smaller number of mesh cells needed, but the larger mesh cells additionally entail a larger time step.
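The difference between staircase and conformal discretization can be felt even on a toy problem. The sketch below (my own illustration, not the PBA algorithm itself) approximates the area of a circle on a Cartesian grid, once with all-or-nothing cells and once with a simplified fractional fill factor per cell:

```python
import numpy as np

def circle_area(n_cells, conformal, radius=1.0):
    """Approximate the area of a circle on an n x n Cartesian grid.
    Staircase: each cell is entirely inside or outside (center test).
    Conformal (simplified stand-in): each cell keeps a fractional fill."""
    h = 2.0 * radius / n_cells
    centers = -radius + h * (np.arange(n_cells) + 0.5)
    X, Y = np.meshgrid(centers, centers)
    if not conformal:
        return float(np.sum(X**2 + Y**2 <= radius**2)) * h * h
    # estimate each cell's fill fraction with a 4x4 sub-sample
    s = (np.arange(4) + 0.5) / 4 - 0.5
    fill = np.zeros((n_cells, n_cells))
    for sx in s:
        for sy in s:
            fill += (X + sx * h)**2 + (Y + sy * h)**2 <= radius**2
    return float(np.sum(fill)) / 16 * h * h

for n in (16, 32, 64):
    err_stair = abs(circle_area(n, False) - np.pi)
    err_conf = abs(circle_area(n, True) - np.pi)
    print(n, err_stair, err_conf)
```

The fractional cells recover the rounded boundary far better at the same cell count, which is the same reason a conformal solver needs fewer (and therefore larger) cells than a staircase one.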

Finally, it is interesting to see how the results converge to a final solution when the mesh is refined. The PBA convergence process is very smooth and extraordinarily fast (Figure 3, left); it can be confidently assumed that every increase in mesh density will improve the result’s accuracy. This statement is not true for staircase approximations, where convergence is slow and not steady (Figure 3, right).

Figure 3. Solution convergence for the connector example. When making the mesh finer and finer, the S-parameter results in the PBA case get closer and closer to the final solution. For the staircase mesh, the convergence is not as smooth as in the PBA case. It takes the staircase model 15 times longer on the same computer to reach the same convergence goal. For a comparison of the converged results refer to Figure 5.

Improving the Performance

Besides improving the actual simulation speed by employing modern computing hardware architectures (multicore, cluster, GPU), there are many means of improving performance by advancing the algorithms. Introducing a conformal method is one such means, as can easily be seen in the section above.

In TLM, compact models replace finely structured elements such as slots, vents, or cables with specific macro models, in order to avoid sampling all details with the grid. This approach has proven particularly useful in EMC applications.

The standard FDTD grid is structured. This means that every mesh line starts on one side of the calculation domain and ends on the other side. In order to avoid the increase of mesh cells in the outer regions, sub-gridding algorithms have been introduced. Additional speed-up of the simulation can be achieved by using different time steps at different mesh levels. The example in Figure 4 illustrates the impact of a mesh with hierarchical sub-grids. It was solved with a subgridding algorithm with mathematically proven stability [17]. The computing time is reduced significantly, by a factor of 9.5.

Figure 4. Subgridding mechanisms reduce the number of mesh points in a simulation. In this example the full grid (left) is 20 times larger (35e6 mesh nodes) than the subgridded version (right, 1.75e6 mesh nodes).
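A back-of-the-envelope count shows why subgridding pays off. The numbers below are hypothetical (they are not taken from the Figure 4 model): a cubic domain is refined by a factor of 3 everywhere versus only in the 1% of the volume that actually contains fine detail:

```python
def cell_count(n, refine, frac):
    """Toy cell count for a cubic n x n x n domain.
    Refining everywhere by `refine` per direction versus refining
    only a fraction `frac` of the cells (the detailed region)."""
    uniform_fine = (n * refine) ** 3
    subgridded = n**3 + int(frac * n**3) * (refine**3 - 1)
    return uniform_fine, subgridded

full, sub = cell_count(100, 3, 0.01)
print(full / sub)  # → about 21 for these assumed numbers
```

The saving is of the same order as the factor-of-20 reduction reported for the example in Figure 4, and it grows quickly with the refinement factor because cell counts scale with the third power.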

Performing Simulations in Time and Frequency Domain

The accuracy of a simulation, namely the agreement between simulation results and the behavior in reality, is usually limited by simplifications in the simulation model. Having the simulation results in front of us, we may wonder whether these are the true S-parameters of our device. All numerical methods promise that the simulation results will eventually converge to the actual solution, if only the mesh is fine enough and all details and effects are represented in the numerical model. If the results of interest no longer change significantly after several mesh refinement steps, the converged solution has been reached. Cross-verification of the results by applying two different numerical approaches to the same problem gives even more confidence, e.g. by comparing time domain and frequency domain solutions (Figure 5). This reassurance is all the easier to obtain if the simulation software offers the possibility to switch between numerical approaches without changing the interface.

Figure 5. S-parameters of the connector example derived with different solution methods: 1. Frequency domain solver on a tetrahedral grid with 0.15 million tetrahedrons (FD-TET). 2. Time domain solver with PBA on a hexahedral grid with 0.7 million mesh cells (TD-PBA). 3. Time domain solver with staircase approximation on a hexahedral grid with 17 million cells (TD-Staircase). The comparison shows good agreement between cases 1 and 2.

As we can see in Figure 5, both approaches, frequency and time domain, deliver the same results. There is just one other constraint which has not yet been considered – the simulation performance. It is defined by the time required for a simulator to reach a predefined accuracy. For our connector and for the accuracy specified in Figure 3, the simulation time does not differ much between the FIT transient solver (1 min.) and the FIT-FEM frequency domain solver (1.5 min.). However, for other applications the difference in computing time may be significant.

Time Domain vs. Frequency Domain

One of the interesting properties of time domain simulation is that we can calculate the S-parameters with an arbitrarily fine frequency resolution and, most importantly, without additional computational effort. Unlike in the frequency domain, in a transient simulation it is thus virtually impossible to miss sharp resonances inside the requested spectral range.
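The mechanism behind this is the Fourier transform of the recorded port signals. The sketch below uses a toy "device" (a hypothetical two-tap delay-and-sum system standing in for the recorded output-port signal of a real solver) to show how one broadband pulse yields the transfer function at all frequencies at once:

```python
import numpy as np

# Broadband excitation: a Gaussian pulse covers many frequencies at once.
dt = 1e-12                       # assumed time step of a transient solver
t = np.arange(4096) * dt
pulse = np.exp(-((t - 200e-12) / 40e-12) ** 2)

# Toy stand-in for the device under test: two delayed, scaled copies.
out = 0.6 * np.roll(pulse, 50) + 0.3 * np.roll(pulse, 120)

# One run, all frequencies: divide output spectrum by input spectrum.
f = np.fft.rfftfreq(len(t), dt)          # frequency axis for the spectra
S21 = np.fft.rfft(out) / np.fft.rfft(pulse)

# At DC the transmission equals the sum of the two tap weights:
print(abs(S21[0]))  # → 0.9
```

Zero-padding the recorded signals refines the frequency resolution arbitrarily, which is why no extra solver run is needed for a denser frequency axis.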

In a time domain simulation a signal has to enter and leave the device under test. Our connector does not have any resonances; it is supposed to behave like a broadband transmission line, and therefore the simulation runs quickly. For resonant structures, the transient simulation can be terminated when a steady state is reached, or earlier, when the remaining signal can be predicted by using digital signal processing techniques. Frequency domain solvers do not face this problem, although finding the resonance frequencies of high-Q structures may require numerous simulations.

Although the finite thickness of some metallizations is technically relevant, it is usually not considered in most solvers. In FEM, its inclusion leads to a large number of tetrahedrons at the edges, or to tetrahedrons of poor quality. In standard time domain methods, the thickness has to be sampled by a mesh cell. This does not lead to a large increase in mesh size; however, these cells will be very small, which in turn reduces the time step because of the CFL criterion and therefore increases the simulation time. Conformal methods such as FIT with PBA face no problem here, because the metallization thickness can be considered inside a mesh cell without compromising the time step.

There is one other distinctive feature: the electrical size. Generally, the discussed high frequency solvers are effective for electrical structure sizes within the range of about 1/1000 to 1000 wavelengths. At the lower bound of this range, the general purpose frequency domain solvers have slight performance advantages. Towards the upper bound, the memory requirements become the relevant factor.

On a typical workstation (8 GB RAM), problem sizes of about 40 wavelengths in each spatial direction can be tackled with a transient solver, whereas 2nd order FEM is restricted to about 10 wavelengths. The introduction of higher order elements, however, enables the solution of electrically larger problems with FEM as well. Since the memory consumption of a 3rd order element is much larger than that of a 1st or 2nd order one, it is not well suited for detail-rich structures. The introduction of mixed order elements can be a solution here.

Beyond these problem sizes, the use of an MLFMM solver or even an asymptotic method is advisable. These methods are specifically designed to tackle electrically very large problems efficiently.

The time domain naturally offers the possibility to study the transient behavior of electromagnetic structures. The simulator can also work as a virtual time domain reflectometer (TDR): delay times and signal degradation on signal lines can be simulated directly. Fields, too, can be studied in the time domain: transient farfields, for example, become increasingly important in ultra wide band (UWB) applications. In multiport devices, every port can be excited individually with a different time signal and the fields can be monitored accordingly.

The lower memory requirements of the time domain methods also allow the solution of very detail-rich structures (Figure 6). What can be done at the hardware level to increase simulation speed or accessible model sizes is beyond the scope of this article.

Frequency domain solvers are well suited to solving infinite periodic problems, such as phased arrays, frequency selective surfaces (FSS), photonic band gap (PBG) structures, etc. Periodic boundaries can be set up either with a phase difference between them or, more practically, with a certain scan angle. A Floquet mode port is a useful addition to this capability: it enables the use of plane waves to monitor polarization or RCS, as well as the determination of the main and grating lobes of a phased array.

Figure 6. A complete IBM package layout used for full wave signal integrity analysis. It consists of eight metallization layers and 40,000 geometrical entities. The full package, shown here in total and detailed view, was imported into CST MWS for a full wave analysis. The benchmark fraction was solved using both the transient approach and the FD solver (27 million mesh nodes and 5.3 million tetrahedrons, respectively). The transient solver model of the full package had 640 million mesh cells and 3.7 billion unknowns. This level of detail made the use of the FD solver for a simulation of the full package unfeasible.

Conclusions

How can I select the best simulator? Most people would think it is obvious. Some will take the most accurate, others the quickest or the cheapest. All these selection criteria need to be looked at as a whole in order to make an informed choice. A good overall criterion would be something like the simulator’s “quality factor”: Q = Accuracy / Effort.

Therefore, choose the program which gives you the best accuracy for a given simulation duration, or sum of money, or amount of RAM, or all three together (= effort), and you won’t do anything wrong. Or, if accuracy is of utmost importance, choose the program that achieves the desired accuracy with the least time and memory effort. By the way, do not forget about the labor costs of integrating the software into your design flow. A program with a good user interface and a high degree of automation will save valuable engineering time and therefore money.

Beware of brute-force hardware arguments like “on a cluster, program X is also very quick”. An intelligent algorithm is quick on any type of hardware and even quicker on a faster computer, a cluster, or a graphics acceleration card. It has to be said that the figures produced to demonstrate the value of performance improvements to an audience are sometimes not achieved with practical examples. But the purchase of such a package should not be the result of window shopping: applying it to your actual simulation task, from setting up the model to the final results, will be most illustrative.

It is the combination of different solution approaches, intelligent algorithms, and the best available hardware that will give you the optimal computing speed.