Pads the size of certain power-of-two arrays to allow
more efficient cache use.

On IA-32 and Intel EM64T processors, when O3 is used with options
-ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler
performs more aggressive data-dependency analysis than for O2, which
may result in longer compilation times.
The O3 optimizations may not improve performance unless loop and
memory-access transformations take place, and in some cases they may
slow down code compared to O2 optimizations.
The O3 option is recommended for applications that have loops that make
heavy use of floating-point calculations and process large data sets.

The -par-schedule option lets you specify a scheduling algorithm or a tuning method for loop iterations.
It specifies how iterations are to be divided among the threads of the team. This option affects performance
tuning and can provide better performance during auto-parallelization.

-par-schedule-static=n tells the compiler to divide iterations into contiguous pieces (chunks) of size n.
The chunks are assigned to threads in the team in a round-robin fashion in the order of the thread number.
Note that the last chunk to be assigned may have a smaller number of iterations. If n is not specified,
the iteration space is divided into chunks that are approximately equal in size, and each thread is assigned at most one chunk.
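As an illustration, the round-robin chunk assignment described above can be sketched in Python. This is a model of the scheduling policy only, not the compiler's actual implementation; the function name is hypothetical.

```python
def static_chunks(iterations, num_threads, n):
    """Map each thread id to the iteration indices it would execute
    under static scheduling with chunk size n."""
    # Divide the iteration space into contiguous chunks of size n;
    # the last chunk may hold fewer than n iterations.
    chunks = [iterations[i:i + n] for i in range(0, len(iterations), n)]
    assignment = {t: [] for t in range(num_threads)}
    for idx, chunk in enumerate(chunks):
        assignment[idx % num_threads].extend(chunk)  # round-robin by thread number
    return assignment

# 10 iterations, 3 threads, chunk size 2: the last chunk is still size 2 here,
# but threads receive unequal totals.
print(static_chunks(list(range(10)), 3, 2))
# {0: [0, 1, 6, 7], 1: [2, 3, 8, 9], 2: [4, 5]}
```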

This option enables additional interprocedural optimizations for single
file compilation. These optimizations are a subset of full intra-file
interprocedural optimizations. One of these optimizations enables the
compiler to perform inline function expansion for calls to functions
defined within the current source file.

This option instructs the compiler to analyze and transform the program so that
64-bit pointers are shrunk to 32-bit pointers, and 64-bit longs (on Linux) are
shrunk to 32-bit longs, wherever it is legal and safe to do so.
For this option to be effective, the compiler must be able to optimize using
the -ipo/-Qipo option and must be able to analyze all library and external calls the program makes.

This option requires that the size of the program executable never exceed 2^32 bytes and
that all data values can be represented within 32 bits. If the program can run correctly on a 32-bit system,
these requirements are implicitly satisfied. If the program violates these size restrictions,
unpredictable behavior might occur.

To override one of the options set by -fast (Linux) or /fast (Windows), specify that
option after the -fast option on the command line. The exception is the xT or QxT
option, which cannot be overridden. The options set by -fast may change from
release to release.

Code is optimized for Intel(R) processors with support for SSE 4.2 instructions.
The resulting code may contain unconditional use of features that are not supported
on other processors. This option also enables new optimizations in addition to
Intel processor-specific optimizations including advanced data layout and code
restructuring optimizations to improve memory accesses for Intel processors.

Do not use this option if you are executing the program on a processor that
is not an Intel processor. If you use this option to compile the main program
(in Fortran) or the function main() in C/C++, the resulting program will display
a fatal run-time error if it is executed on an unsupported processor.

Code is optimized for Intel(R) processors with support for SSE 4.1 instructions.
The resulting code may contain unconditional use of features that are not supported
on other processors. This option also enables new optimizations in addition to
Intel processor-specific optimizations including advanced data layout and code
restructuring optimizations to improve memory accesses for Intel processors.

Do not use this option if you are executing the program on a processor that
is not an Intel processor. If you use this option to compile the main program
(in Fortran) or the function main() in C/C++, the resulting program will display
a fatal run-time error if it is executed on an unsupported processor.

Code is optimized for Intel(R) Atom(TM) processors.
The resulting code may contain unconditional use of features that are not supported
on other processors. This option also enables new optimizations in addition to
Intel processor-specific optimizations including advanced data layout and code
restructuring optimizations to improve memory accesses for Intel processors.

Do not use this option if you are executing the program on a processor that
is not an Intel processor. If you use this option to compile the main program
(in Fortran) or the function main() in C/C++, the resulting program will display
a fatal run-time error if it is executed on an unsupported processor.

Code is optimized for Intel(R) processors with support for SSSE3 instructions.
The resulting code may contain unconditional use of features that are not supported
on other processors. This option also enables new optimizations in addition to
Intel processor-specific optimizations including advanced data layout and code
restructuring optimizations to improve memory accesses for Intel processors.

Do not use this option if you are executing the program on a processor that
is not an Intel processor. If you use this option to compile the main program
(in Fortran) or the function main() in C/C++, the resulting program will display
a fatal run-time error if it is executed on an unsupported processor.

Code is optimized for Intel Pentium M and compatible Intel processors. The
resulting code may contain unconditional use of features that are not supported
on other processors. This option also enables new optimizations in addition to
Intel processor-specific optimizations including advanced data layout and code
restructuring optimizations to improve memory accesses for Intel processors.

Do not use this option if you are executing the program on a processor that
is not an Intel processor. If you use this option to compile the main program
(in Fortran) or the function main() in C/C++, the resulting program will display
a fatal run-time error if it is executed on an unsupported processor.

Code is optimized for Intel Pentium 4 and compatible Intel processors;
this is the default for Intel(R) EM64T systems. The resulting code may contain
unconditional use of features that are not supported on other processors.

Tells the auto-parallelizer to generate multithreaded code for loops that can be safely executed in parallel.
To use this option, you must also specify option O2 or O3. The default number of threads spawned is equal to
the number of processors detected in the system where the binary is compiled. This can be changed by setting the
environment variable OMP_NUM_THREADS.

The use of -Qparallel to generate auto-parallelized code requires support libraries that are
dynamically linked by default. Specifying libguide.lib on the link line statically links in
libguide.lib, allowing auto-parallelized binaries to work on systems that do not have the dynamic version
of this library installed.

The use of -Qparallel to generate auto-parallelized code requires support libraries that are
dynamically linked by default. Specifying libguide40.lib on the link line statically links in
libguide40.lib, allowing auto-parallelized binaries to work on systems that do not have the
dynamic version of this library installed.

-no-prec-div enables optimizations that give slightly less precise results
than full IEEE division.

When you specify -no-prec-div along with some optimizations, such as
-xN and -xB (Linux) or /QxN and /QxB (Windows),
the compiler may change floating-point division computations into
multiplication by the reciprocal of the denominator.
For example, A/B is computed as A * (1/B) to improve the speed of the
computation.

However, sometimes the value produced by this transformation is
not as accurate as full IEEE division. When it is important to have fully
precise IEEE division, do not use -no-prec-div.
This will enable the default -prec-div and the result will be more accurate,
with some loss of performance.
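The accuracy trade-off can be observed directly in any IEEE arithmetic environment. This Python sketch (illustrative only, not tied to any compiler) finds small integer pairs where the reciprocal-multiply result differs from true division:

```python
# a/b uses one correctly rounded IEEE division; a*(1.0/b) rounds twice
# (once for the reciprocal, once for the multiply), so the last bit can differ.
mismatches = [(a, b)
              for a in range(1, 100)
              for b in range(1, 100)
              if a / b != a * (1.0 / b)]

print(len(mismatches) > 0)  # some pairs round differently
```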

Instruments the program for profiling, for the first phase of
two-phase profile-guided optimization. This instrumentation gathers information
about a program's execution paths and data values but does not gather
information from hardware performance counters. The profile instrumentation
also gathers data for optimizations which are unique to profile-feedback
optimization.

Instructs the compiler to produce a profile-optimized
executable and to merge available dynamic-information (.dyn)
files into a pgopti.dpi file. If you perform multiple
executions of the instrumented program, -prof-use merges
the dynamic information files again and overwrites the
previous pgopti.dpi file.
Without any other options, the current directory is
searched for .dyn files.

Enable use of ANSI aliasing rules in optimizations. This option tells the compiler to assume that the program
adheres to ISO C Standard aliasability rules.

If your program adheres to these rules, this option allows the compiler to optimize more aggressively.
If it does not adhere to these rules, this option can cause the compiler to generate incorrect code.

The compiler adds setup code in the C/C++/Fortran main function to enable optimal malloc algorithms:

n=0: Default; no changes to the malloc options, and no call to mallopt() is made.

n=1: M_MMAP_MAX=2 and M_TRIM_THRESHOLD=0x10000000. mallopt() is called with these two settings.

n=2: M_MMAP_MAX=2 and M_TRIM_THRESHOLD=0x40000000. mallopt() is called with these two settings.

n=3: M_MMAP_MAX=0 and M_TRIM_THRESHOLD=-1. mallopt() is called with these two settings. This
causes sbrk() calls to be used instead of mmap() calls to get memory from the system.

The two parameters, M_MMAP_MAX and M_TRIM_THRESHOLD, are described below.

Function: int mallopt (int param, int value) When calling mallopt, the param argument
specifies the parameter to be set, and value the new value to be set. Possible choices
for param, as defined in malloc.h, are:

M_TRIM_THRESHOLD This is the minimum size (in bytes) of the top-most, releasable chunk
that will cause sbrk to be called with a negative argument in order to return memory
to the system.

M_TOP_PAD This parameter determines the amount of extra memory to obtain from the system
when a call to sbrk is required. It also specifies the number of bytes to retain when
shrinking the heap by calling sbrk with a negative argument. This provides the necessary
hysteresis in heap size such that excessive amounts of system calls can be avoided.

M_MMAP_THRESHOLD All chunks larger than this value are allocated outside the normal heap,
using the mmap system call. This way it is guaranteed that the memory for these chunks
can be returned to the system on free. Note that requests smaller than this threshold
might still be allocated via mmap.

M_MMAP_MAX The maximum number of chunks to allocate
with mmap. Setting this to zero disables all use of mmap.

Enables cache/bandwidth optimization for stores under conditionals (within vector loops).
This option tells the compiler to perform a conditional check in a vectorized loop.
This checking avoids unnecessary stores and may improve performance by conserving bandwidth.

Enables the compiler to generate runtime control code for effective automatic parallelization.
This option generates code to perform run-time checks for loops that have symbolic loop bounds.
If the granularity of a loop is greater than the parallelization threshold, the loop will be
executed in parallel. If you do not specify this option, the compiler may not parallelize loops
with symbolic loop bounds if the compile-time granularity estimation of a loop cannot ensure
it is beneficial to parallelize the loop.

Multi-versioning generates different versions of a loop based on run-time dependence testing,
alignment, and checks for short/long trip counts. If this option is turned on, it triggers more
versioning, at the expense of additional overhead for pointer-aliasing checks and scalar replacement.

This option specifies that the main program is not written in Fortran.
It is a link-time option that prevents the compiler from linking for_main.o
into applications.

For example, if the main program is written in C and calls a Fortran subprogram,
specify -nofor-main when compiling the program with the ifort command.
If you omit this option, the main program must be a Fortran program.

One or more of the following settings may have been set. If so, the "General Notes" section of the
report will say so, and you can read below to find out more about what these settings mean.

Operating Modes (Default=Custom Mode):

Operating Mode is a BIOS setting which allows you to select the appropriate mode of operation based on the specific user environment. This menu option is provided to allow optimization of the system for minimum power usage, maximum efficiency, or maximum performance.

Values for this BIOS setting can be:

Acoustic/minimum power mode: Strives to minimize the absolute power consumption of the system while it is operating.

Efficiency mode: Maximizes the performance/watt efficiency as measured by power benchmarks. It provides the best features for reducing power and increasing performance without a detrimental effect on either one.

Performance mode: Maximizes the absolute performance of the system by setting all programmable bus speeds to their maximum rated frequencies without regard for power.

Custom mode: A combination of Efficiency mode and Performance mode.

KMP_STACKSIZE

Specify stack size to be allocated for each thread.

KMP_AFFINITY

KMP_AFFINITY = < physical | logical >, starting-core-id
specifies the static mapping of user threads to physical cores. For example,
if you have a system configured with 8 cores, OMP_NUM_THREADS=8, and
KMP_AFFINITY=physical,0, then thread 0 will be mapped to core 0, thread 1 will be mapped to core 1, and
so on in a round-robin fashion.

KMP_AFFINITY = granularity=fine,scatter
The value of the environment variable KMP_AFFINITY affects how the threads of an auto-parallelized program are scheduled across processors.
Specifying granularity=fine selects the finest granularity level, causing each OpenMP thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence a combination of these two options will spread the threads evenly across sockets, with one thread per physical core.
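For example, on a hypothetical 8-core Linux system the two variables might be set together as:

```shell
# Pin one OpenMP thread per physical core, spread across the whole system.
export OMP_NUM_THREADS=8
export KMP_AFFINITY=granularity=fine,scatter
echo "$KMP_AFFINITY"
```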

OMP_NUM_THREADS

Sets the maximum number of threads to use for OpenMP* parallel regions if no
other value is specified in the application. This environment variable
applies to both -openmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows).
Example syntax on a Linux system with 8 cores:
export OMP_NUM_THREADS=8

Hardware Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to
prefetch data into the cache according to a pattern-recognition algorithm.

In some cases, setting this option to Disabled may improve
performance. Users should only disable this option
after performing application benchmarking to verify improved
performance in their environment.

Adjacent Sector Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to
fetch the adjacent cache line within a 128-byte sector that contains
the data needed due to a cache line miss.

In some cases, setting this option to Disabled may improve
performance. Users should only disable this option
after performing application benchmarking to verify improved
performance in their environment.

Power C-States:

Enabling CPU C-states causes the CPU to enter a low-power mode when the CPU is idle.

Turbo Mode:

Enabling turbo mode can boost overall CPU performance when not all CPU cores are being
fully utilized.

Turbo Boost:

This BIOS option can be set to Power Optimized or Traditional. When Power Optimized is
selected, Intel Turbo Boost Technology engages after Performance state P0 is sustained
for longer than two seconds. When Traditional is selected, Intel Turbo Boost Technology is
engaged even for P0 requests less than two seconds.

Demand Scrub:

Demand scrub occurs when the memory controller reads memory for data or instructions and
the demand-scrubbing logic detects a correctable error. Corrected data is forwarded to the
memory controller and written back to memory. With demand scrubbing disabled, the data being read
into the memory controller is corrected by the ECC logic, but no write to main memory
occurs. Since the data is not corrected in memory, subsequent reads of the same data
will need to be corrected, causing a performance impact.

High Bandwidth:

Enabling this option allows the chipset to defer memory transactions and process them out of order for optimal performance.

ulimit -s <n>

Sets the stack size to n kbytes, or unlimited to allow the stack size
to grow without limit.

When running multiple copies of benchmarks, the SPEC config file feature
submit is sometimes used to cause individual jobs to be bound to
specific processors. This specific submit command is used for Linux.
The description of the elements of the command are:

/usr/bin/taskset [options] [mask] [pid | command [arg] ... ]:
taskset is used to set or retrieve the CPU affinity of a running
process given its PID or to launch a new COMMAND with a given CPU
affinity. The CPU affinity is represented as a bitmask, with the
lowest order bit corresponding to the first logical CPU and highest
order bit corresponding to the last logical CPU. When taskset
returns, it is guaranteed that the given program has been scheduled
to a legal CPU.
The default behavior of taskset is to run a new command with a
given affinity mask:
taskset [mask] [command] [arguments]

$MYMASK: The bitmask (in hexadecimal) corresponding to a specific
SPECCOPYNUM. For example, the $MYMASK value for the first copy of a
rate run will be 0x00000001, for the second copy 0x00000002, and so on.
Thus, the first copy of the rate run will have a CPU affinity of CPU0,
the second copy will have affinity CPU1, and so on.
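The mask computation itself is just a shifted bit. A small Python sketch (the function name is hypothetical):

```python
def affinity_mask(copy_num):
    """Hexadecimal affinity bitmask for 1-based SPEC copy number copy_num:
    one bit per logical CPU, lowest-order bit = CPU0."""
    return "0x%08x" % (1 << (copy_num - 1))

print(affinity_mask(1), affinity_mask(2))  # 0x00000001 0x00000002
```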

$command: Program to be started, in this case, the benchmark instance
to be started.

Using numactl to bind processes and memory to cores

For multi-copy runs or single copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to, "-l" instructs numactl to keep a process's memory on the local node, and "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').

submit= $[top]/mysubmit.pl $SPECCOPYNUM "$command"

On Xeon 74xx series processors, some benchmarks at peak will run n/2 copies on a system with n logical processors.
The mysubmit.pl script assigns each copy in such a way that no two copies will share an L2 cache, for optimal performance.
The script looks in /proc/cpuinfo to come up with the list of cores that will satisfy this requirement.
The source code is shown below.