HFSS does have limited GPU acceleration capability, but only for transient analysis. If you wish to use this, please contact the HPC Group.

HFSS will, by default, use /tmp on whatever machine it is running on. The /tmp partitions on Flux nodes are not large, so it is possible to fill them and interfere with other jobs. If you are running a large simulation, you should set the temporary directory to something else, for example, to your project directory under /scratch.

To set the temporary directory, use the command-line option -batchoptions, as in the following example (the \ character indicates that the next line is a continuation of the current command).
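A minimal sketch of such a command; the option name TempDirectory and the /scratch path shown are assumptions, so confirm them against the batchoptions listing for your HFSS version and use your own project directory:

```shell
# Redirect HFSS's temporary files away from /tmp to scratch space.
# The path /scratch/example_flux/run1/tmp is hypothetical -- substitute
# your own project directory, and create it before the run.
hfss -Ng -BatchSolve \
    -batchoptions 'TempDirectory="/scratch/example_flux/run1/tmp"' \
    project.hfss
```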

Note that the whole string argument to -batchoptions is enclosed in single quotes, and inside those, the directory name is enclosed in double quotes.

Running HFSS interactively

You should only run HFSS interactively – outside of PBS – for very short, test runs.

We will copy the ogive-IE.hfss input file from the HFSS Examples directory to use for this example. Here is an example of running HFSS in batch mode. The -Ng option suppresses opening the GUI and is required here because there is no X display. The variable $HFSS_ROOT is set by the module, and we use it to copy the example data file.
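The steps above might look like the following in an interactive shell; the exact subdirectory under $HFSS_ROOT/Examples and the -BatchSolve flag spelling are assumptions, so check them against your installed version:

```shell
# Load the HFSS module; this sets $HFSS_ROOT
module load hfss

# Copy the example project into the current directory
# (the HFSS-IE subdirectory name is an assumption)
cp $HFSS_ROOT/Examples/HFSS-IE/ogive-IE.hfss .

# Solve in batch mode with no GUI (-Ng); required since there is no X display
hfss -Ng -BatchSolve ogive-IE.hfss
```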

Running HFSS in single node shared memory parallel (SMP) mode

Not all simulation types are parallelizable, but most of those that are can take advantage of shared-memory parallel processing. This type of parallel processing is confined to a single node; the multinode parallel options are shown in the next section.

To run an SMP simulation, you need to request more than one processor on a node from PBS with -l nodes=1:ppn=N. In the example below, we request and use 4 processors.
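A sketch of a PBS script for a 4-processor SMP run; the batchoption name 'HFSS/NumberOfProcessors' and the quoting shown are assumptions, so confirm them against the batchoptions listing for your HFSS version:

```shell
#!/bin/bash
#PBS -N hfss-smp
#PBS -l nodes=1:ppn=4,walltime=2:00:00

cd $PBS_O_WORKDIR
module load hfss

# Tell HFSS to use the 4 processors PBS allocated on this node
# (option name is an assumption for your HFSS version)
hfss -Ng -BatchSolve \
    -batchoptions "'HFSS/NumberOfProcessors'=4" \
    ogive-IE.hfss
```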

Running HFSS in mixed multicore, multinode parallel mode

This job type lets HFSS use multiple nodes either to run multiple values in a sweep simultaneously using the distributed solve option (DSO) or to take a very large model and use the domain decomposition method (DDM) to run a single model on multiple nodes. Fewer simulation types support these methods. If you will be using fewer than 12 processors for anything other than sweeps, please use the SMP option above to obtain the best performance. DDM and DSO require at least 3 unique nodes; contact support if needed, since the system will sometimes condense requests onto fewer nodes.
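A sketch of a PBS script for a distributed run across 3 nodes; the -distributed and -machinelist flags are assumptions, so check them against the command-line reference for your HFSS version:

```shell
#!/bin/bash
#PBS -N hfss-dso
#PBS -l nodes=3:ppn=4,walltime=4:00:00

cd $PBS_O_WORKDIR
module load hfss

# Build a comma-separated list of the unique nodes PBS assigned us
MACHINES=$(sort -u $PBS_NODEFILE | paste -sd, -)

# Distribute the solve across the assigned nodes
# (flag names are assumptions for your HFSS version)
hfss -Ng -BatchSolve -distributed \
    -machinelist list=$MACHINES \
    ogive-IE.hfss
```

Because DDM and DSO require at least 3 unique nodes, it is worth checking the output of `sort -u $PBS_NODEFILE` in the job to verify PBS actually assigned 3 different hosts.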

Fast sweeps do not support DSO as of HFSS 15.0 (2014.0.0).

To use DDM, it must be enabled in your analysis properties under your solver type. For more details on supported configurations for the HPC/multinode options in HFSS, refer to the HFSS documentation and this Ansys presentation.