This page describes an older version of the product. The latest stable version is 15.2.

Infrastructure

Check your Infrastructure First

No matter what kind of optimization you perform, you cannot ignore your infrastructure. Therefore, you must verify that you have the following:

Sufficient physical and virtual memory

Sufficient disk speed

A tuned database

Sufficient CPU power to handle the load

Network cards configured for speed

A JVM with a fast JIT

Max Processes and File Descriptors/Handlers Limit

Linux

Linux imposes a maximum number of processes per user, as well as a limit on the number of file descriptors (which covers processes, files, sockets, and threads). These limits let you control how many processes each user on the server is authorized to run.

To improve performance and stability, you must set the process limit for the super-user root to at least 8192. The example below uses 32,000:

ulimit -u 32000

Before settling on final file descriptor values, further testing and monitoring on the actual environment is required; 8K, 16K, or 32K are just examples.
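Before changing anything, it is worth recording the limits currently in effect for the account that will run the application. A minimal, read-only sketch:

```shell
# Check the current limits for this account before tuning
# (both commands are read-only and safe to run anywhere):
ulimit -u   # max user processes
ulimit -n   # max open file descriptors
```

Comparing these values against the recommendations above shows whether tuning is needed at all.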

Note

Verify that you set the ulimit using the -n option, e.g. ulimit -n 8192, rather than ulimit 8192. With no option, ulimit defaults to ulimit -f, so the value sets the maximum file size (in blocks) instead of the descriptor limit, which might cause a fatal process crash.
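The pitfall in the note can be demonstrated safely in a subshell, so the limit change does not affect the current shell:

```shell
# In bash, "ulimit 8192" with no option is equivalent to
# "ulimit -f 8192" (max file size), NOT "ulimit -n 8192"
# (max open file descriptors). The parentheses create a subshell,
# so the lowered limit is discarded afterwards.
(
  ulimit 8192                            # silently changes the file-size limit
  echo "file size limit:  $(ulimit -f)"  # now 8192
  echo "open files limit: $(ulimit -n)"  # unchanged
)
```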

How do I configure the File Descriptors on Linux?

The file descriptor hard limit should be set to 8192, and the file descriptor soft limit should be increased from the default 1024 to 8192.

On Linux, change the default values in the /etc/security/limits.conf file; the new limits take effect at the next login, with no reboot required.

Note that the rlim_fd_max and rlim_fd_cur settings below belong to the Solaris /etc/system file, not to Linux. On Solaris, edit /etc/system with root access and reboot the server:

set rlim_fd_max=8192
set rlim_fd_cur=8192

After the reboot, run the following from the application account:

ulimit -n

It should report 8192.
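A minimal /etc/security/limits.conf sketch for Linux (the user name appuser is a placeholder for the actual application account):

```
# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
appuser     soft    nofile   8192
appuser     hard    nofile   8192
```

The new limits apply at the next login session of appuser; verify with ulimit -n.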

Tip

Increase the ulimit value when many concurrent users access the space.

Windows

Windows 2003 has no parameter that deals directly with the number of file handles; the number is not explicitly limited. However, file handle allocations consume part of the heap's shared section, which is relatively small (512 KB by default). Exhausting this heap might lead to application failure.

How do I configure the File Handlers on Windows?

To increase it, run regedit and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems. In the "Windows" key, find "SharedSection=1024,3072,512", where 512 KB is the size of the heap shared section for processes running in the background. This value should be increased; the recommendation is to raise it initially to 1024 KB, with a maximum value of 3072. A reboot is necessary for the new setting to take effect.

One report in the Sun bug database describes a fixed bug (fixed in JVM 1.5 RC1) that mentions a limit of 2035 file handles per JVM. The case has test Java code attached, which can be used to check the effect of the registry reconfiguration.

TCP tuning

Linux

TCP_KEEPALIVE_TIME

Description: Determines how often to send TCP keepalive packets to keep a connection alive when it is idle.
Should be lowered to ensure fast fail-over in case of network failure (e.g. router failure).
Set:

echo 1 > /proc/sys/net/ipv4/tcp_keepalive_time

Default value: 7200 seconds (2 hours)
Recommended value: 1 second
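The echo command above changes the value only until the next reboot. A sketch for inspecting the current value and persisting a new one via sysctl (the value 1 follows the recommendation above; the privileged commands are shown as comments since they require root):

```shell
# Current value, readable without root:
cat /proc/sys/net/ipv4/tcp_keepalive_time

# Runtime change (root required), same effect as the echo above:
#   sysctl -w net.ipv4.tcp_keepalive_time=1
# To persist across reboots, add this line to /etc/sysctl.conf
# and apply it with "sysctl -p":
#   net.ipv4.tcp_keepalive_time = 1
```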

TCP_KEEPALIVE_INTERVAL

Description: Determines the wait time between successive keepalive probes when no response is received.
Set: