
Introduction

This article presents a set of tools, system settings, and tuning tips for Java server applications that run on and scale across 2 to 64 CPU Sun Enterprise servers. This information was assembled by engineers with many years of experience tuning a variety of commercial server-side Java applications on Solaris.

Analysis Tools

The table below lists the performance analysis tools covered in this article, organized by software layer. Many of these tools can also be used to detect problems other than performance bottlenecks.

Click on a Name or a Parameter to link to a particular topic. Many tool descriptions provide sample output, suggestions for interpreting output results, tips on improving output results, and links to related sites.

Solaris 8 Tools

mpstat

The mpstat utility is a useful tool to monitor CPU utilization, especially with multithreaded applications running on multiprocessor machines, which is a typical configuration for enterprise solutions.

Running mpstat with an interval argument of 5 to 10 seconds is quite non-intrusive; a larger interval, such as 60 seconds, might be suitable for certain applications. Statistics are gathered for each clock tick.

An interval smaller than 5 or 10 seconds produces data that is more difficult to analyze, while a larger interval can smooth the data by averaging out spikes that could mislead you during analysis.
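For example, the following invocation prints per-CPU statistics every 10 seconds until interrupted:

mpstat 10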

What to look for

Note the much higher intr and ithr values for CPU#20 and CPU#21. Solaris selects certain CPUs to handle the system interrupts. Which CPUs, and how many, are chosen depends on the I/O devices attached to the system, the physical location of those devices, and whether interrupts have been disabled on a CPU (psradm command).

intr – interrupts

ithr – thread interrupts (not including the clock interrupts)

csw – Voluntary context switches. When this number increases steadily and the application is not I/O bound, it may indicate mutex contention.

icsw – Involuntary context switches. When this number increases past 500, the system is under a heavy load.

smtx – spins on mutexes. If smtx increases sharply, for instance from 50 to 500, it is a sign of a system resource bottleneck (for example, network or disk).

usr, sys, and idl – Together, these three columns show CPU saturation. A well-tuned application under full load (0% idle) should show 80% to 90% usr time and 10% to 20% sys time. A smaller percentage value for sys means more time for user code and fewer preemptions, which results in greater throughput for a Java application.

Things to try

Do not include the CPUs that handle interrupts in processor sets or processor bindings. In the above example, CPU#20 and CPU#21 are handling interrupts. Suppose you want to run 14 instances of your application and one instance performs best on 2 CPUs; it is then reasonable to expect that creating 14 two-CPU processor sets would yield the best performance. The better approach is to create 13 processor sets that exclude the interrupt-handling CPUs, bind 13 of the processes to those sets, and let the last process run on the remaining CPUs (see the psrset sketch below). It is important to make available to your application as many CPUs as it can use efficiently.
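As a sketch, assuming CPUs 0 and 1 are free of interrupt handling, a two-CPU processor set can be created and a process bound to it as follows (the set ID printed by the first command is used by the second; <pid> is the process to bind):

# psrset -c 0 1
# psrset -b <set_id> <pid>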

Do you see increasing csw? For a Java application, an increasing csw value most likely relates to network use. A common cause of a high csw value is creating too many socket connections, either by not pooling connections or by handling new connections inefficiently. If this is the case, you will also see a high TCP connection count when executing netstat -a | wc -l (refer to the netstat section).

Do you see increasing icsw? A common cause of this is preemption, most likely because a thread reached the end of its time slice on the CPU. For a Java application, this could be a sign that there is room for improvement in code optimization.

iostat

The iostat tool gives statistics on the disk I/O subsystem. The iostat command has many options. More information can be found in the man pages. The following options provide information on locating I/O bottlenecks.
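A useful starting invocation, which prints extended statistics with descriptive device names every 5 seconds, is:

iostat -xn 5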

What to look for

%b – Percentage of time the disk is busy (transactions in progress). Average %b values over 25 percent could indicate a bottleneck.

%w – Percentage of time there are transactions waiting for service (queue non-empty).

asvc_t – Average response time of active transactions, in milliseconds. The column is somewhat mislabeled: despite its name (active service time), it reports the time between a user process issuing a read and that read completing. Consistent values over 30 ms could indicate a bottleneck.

Things to try

For a Java application, disk bottlenecks can often be addressed with software caches, for example a JDBC result-set cache or a cache of generated pages. Disk reads and writes are slow; limiting disk access is a sure way to improve performance. Problems caused by excessive disk access are often hidden on Solaris by its file system caches, but even so, using software caches to avoid file system and operating system overhead is recommended.

Mount file systems with options (refer to the mount_ufs man page). Several mount options can eliminate some disk load; which ones to try depends highly on the type of data. One possibility is noatime, which tells the ufs file system not to update the access time on files. This can reduce the load on systems that access read-only files or do error logging.

# mount -F ufs -o noatime /<your_volume>

Add more disks to the file system. If you are using a single-disk file system, upgrading to a hardware or software RAID is the next logical step. Hardware RAID is significantly faster than software RAID and is strongly recommended; a software RAID solution adds computational (CPU) load to the system.

Change the block size. Depending on the storage hardware and application behavior, a block size other than the ufs default of 8192 bytes may perform better. See the mkfs and newfs man pages for ways to change the block size.
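For example, a new ufs file system with a 4096-byte block size could be created like this (the device path is illustrative; this destroys any existing data on the device):

# newfs -b 4096 /dev/rdsk/c0t0d0s6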

netstat

The netstat tool gives statistics on the network subsystem. It can be used to analyze many aspects of the network subsystem, two of which are the TCP/IP kernel module and the interface bandwidth. An overview of both uses is below.

netstat -I hme0 10

These netstat options are used to analyze interface bandwidth. The upper bound (maximum) of the current throughput can be calculated from the output. Only an upper bound can be computed because netstat reports packet counts, and packets are not necessarily of maximum size. The upper bound of the bandwidth can be calculated using the following equation:
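As a sketch, assuming every packet were maximum-sized, that is, the interface MTU (1500 bytes for a standard Ethernet interface such as hme0):

(upper-bound bytes/sec) = (input packets/sec + output packets/sec) x (MTU)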

What to look for

colls – collisions. If your network is not switched, a low level of collisions is expected. As the network becomes increasingly saturated, collisions will increase and eventually become a bottleneck. The best solution for collisions is a switched network.

errs – errors. The presence of errors could indicate device errors. If your network is switched, errors indicate that you are nearly consuming the bandwidth capacity of your network. The solution to this problem is to give the system more bandwidth, which can be achieved through more network interfaces or a network bandwidth upgrade. This is highly dependent on your particular network architecture.

Things to try

For a Java application, network saturation is difficult to address other than by increasing bandwidth. If saturation occurs early (with fewer than 8 CPUs for an application server on 100 Mbit Ethernet), then investigating whether the application uses the network conservatively is a good first step.

Increase network bandwidth. If your network is not switched, the best step to take is to upgrade to a switched network. If your network is switched, first check if more network interfaces are a possible solution, otherwise upgrade to a higher bandwidth network.

netstat -sP tcp

These netstat options are used to analyze the TCP kernel module. Many of the reported fields correspond to kernel-module counters that indicate bottlenecks. These bottlenecks can be addressed using the ndd command and the tuning parameters referenced in the /etc/rc2.d/S69inet section.

What to look for

tcpListenDrop – If, over several samples of the command output, tcpListenDrop continues to increase, it could indicate a problem with queue size.

Things to try

Increase the Java application thread count. A possible cause of an increasing tcpListenDrop is application throughput being limited by the number of execution threads; increasing the thread count may be a good thing to try.

Increase queue size. Increase the request queue sizes using ndd; more information on these and other ndd parameters appears in the /etc/rc2.d/S69inet section.

ndd -set /dev/tcp tcp_conn_req_max_q 1024

ndd -set /dev/tcp tcp_conn_req_max_q0 4096

netstat -a | grep <your_hostname> | wc -l

Running this command gives a rough count of the socket connections on the system. There is a limit to how many connections can be open at one time, so this is a good tool to use when looking for bottlenecks.

netstat -a | grep <your_hostname> | wc -l Output

# netstat -a | grep <your_hostname> | wc -l
34567

What to look for

socket count – If the number returned is greater than 20,000 then the number of socket connections could be a possible bottleneck.

Things to try

For a Java application, a common cause of too many sockets is inefficient use of sockets. It is common practice in Java applications to create a socket connection each time a request is made. Creating and destroying socket connections is not only expensive, but can cause unnecessary system overhead by creating too many sockets. Creating a connection pool may be a good solution to investigate (a minimal sketch appears after these suggestions). For an example of connection pool use, refer to Advanced Programming for the Java 2 Platform, Chapter 8.

Lower the starting point of the anonymous (ephemeral) port range.

ndd -set /dev/tcp tcp_smallest_anon_port 1024

Decrease the time a TCP connection stays in TIME_WAIT.

ndd -set /dev/tcp tcp_time_wait_interval 60000
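As a minimal sketch of the pooling idea, a fixed number of connections is created once and then borrowed and returned instead of opened per request (class name and sizing are illustrative, not a production implementation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** A minimal fixed-size JDBC connection pool (illustrative sketch). */
public class SimpleConnectionPool {
    private final BlockingQueue<Connection> pool;

    public SimpleConnectionPool(String url, String user, String pw, int size)
            throws SQLException {
        pool = new LinkedBlockingQueue<Connection>(size);
        for (int i = 0; i < size; i++) {
            // Open all connections up front; no sockets are created per request.
            pool.add(DriverManager.getConnection(url, user, pw));
        }
    }

    /** Blocks until a connection is free instead of opening a new socket. */
    public Connection borrow() throws InterruptedException {
        return pool.take();
    }

    /** Returns a connection to the pool for reuse. */
    public void release(Connection c) {
        pool.offer(c);
    }
}

Because the pool size is fixed, the socket count stays bounded no matter how many requests arrive.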

verbose:gc

The java -verbose:gc option is a great tool for quickly diagnosing garbage collection (GC) bottlenecks. Calculate the total time spent in GC by adding up the times printed by -verbose:gc. If the fraction (time in GC)/(elapsed time) is greater than 0.2, GC is most likely a problem; if it is less than 0.2, GC is not the issue. For more detailed information about JVM garbage collection, see Tuning Garbage Collection with the 1.3.1 Java Virtual Machine.
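Enabling the option and a typical line of its output look like this (figures illustrative):

java -verbose:gc MyServer

[GC 325407K->83000K(776768K), 0.2300771 secs]

Each line shows heap occupancy before and after the collection, the total heap size, and the time the collection took; summing those times gives the numerator of the fraction above.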

Java Application

TNF traces

This is a great tool for both profiling and debugging a Java application. On a Solaris system, refer to the manual pages for tracing, TNF_PROBE, tnfdump, tnfmerge, and prex; these give an overall understanding of inserting probes in source code. Note that the manual pages were written with C/C++ sources in mind.

Use tnfdump on the output trace file to get ASCII output, or use tnfmerge to merge trace files. For information on TNF (Trace Normal Form), including TNFView and tnfmerge, refer to Performance Profiling Using TNF.
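As a sketch of the workflow (file paths illustrative): start the probed process under prex, enable the probes, let it run, then convert the binary trace with tnfdump:

# prex -o /tmp/app.tnf ./server
prex> enable $all
prex> continue
...
# tnfdump /tmp/app.tnf > app-trace.txt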

JVMPI

The JVMPI (Java Virtual Machine Profiler Interface) is a two-way function call interface between the Java virtual machine and an in-process profiler agent. On one hand, the virtual machine notifies the profiler agent of various events, corresponding to, for example, heap allocation, thread start, etc. On the other hand, the profiler agent issues controls and requests for more information through the JVMPI. For example, the profiler agent can turn on/off a specific event notification based on the needs of the profiler front-end. A detailed overview of JVMPI can be found at Java Virtual Machine Profiler Interface (JVMPI).
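For example, hprof, the profiler agent shipped with the JDK, attaches through this interface (the class name is illustrative):

java -Xrunhprof:cpu=samples,depth=8 MyServer

This samples thread call stacks while the application runs and writes a java.hprof.txt report on exit; java -Xrunhprof:help lists the available options.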

Commercial Profiling Tools

Commercial and public source profiling tools are mentioned here. All of them use the JVMPI.

Tuning Parameters

Solaris 8 Tuning Parameters

Below are the Solaris 8 and JVM tuning parameters found to work best with server-side Java applications. The tuning parameters are listed with a brief description. A more in-depth look at when to use these parameters is discussed in the Analysis Tools and Tuning Process sections.

/etc/system

Below is a list of /etc/system tuning parameters used during the performance study. Apply the changes by appending each setting to the /etc/system file and rebooting the system.


set rlim_fd_max=8192

“Hard” limit on file descriptors that a single process might have open. To override this limit requires superuser privilege.

set tcp:tcp_conn_hash_size=8192

Controls the hash table size in the TCP module for all TCP connections.

set autoup=900

Along with tune_t_fsflushr, autoup controls the amount of memory examined for dirty pages in each invocation and the frequency of file system sync operations.

The value of autoup is also used to control whether a buffer is written out from the free list. Buffers marked with the B_DELWRI flag (file content pages that have changed) are written out whenever the buffer has been on the list for longer than autoup seconds.

Increasing the value of autoup keeps the buffers around for a longer time in memory.

set tune_t_fsflushr=1

Specifies the number of seconds between fsflush invocations.

set rechoose_interval=150

Number of clock ticks before a process is deemed to have lost all affinity for the last CPU it ran on. After this interval expires, any CPU is considered a candidate for scheduling a thread. This parameter is relevant only for threads in the timesharing class. Real-time threads are scheduled on the first available CPU.

/etc/rc2.d/S69inet

Below is a list of TCP kernel tuning parameters known to matter for high-throughput Java servers. Apply them by executing each line individually with root privileges, or by appending each to the /etc/rc2.d/S69inet file and rebooting the system.

ndd -set /dev/tcp tcp_time_wait_interval 60000

The time in milliseconds a TCP connection stays in the TIME-WAIT state. Refer to RFC 1122, 4.2.2.13 for more information.

ndd -set /dev/tcp tcp_keepalive_interval 900000

The time in milliseconds a TCP connection remains idle before keep-alive probes are sent. Refer to RFC 1122, 4.2.3.6 for more information.

ndd -set /dev/tcp tcp_conn_req_max_q 1024

The default maximum number of pending TCP connections for a TCP listener waiting to be accepted by accept(3SOCKET).

ndd -set /dev/tcp tcp_conn_req_max_q0 4096

The default maximum number of incomplete (three-way handshake not yet finished) pending TCP connections for a TCP listener.

Refer to RFC 793 for more information on TCP three-way handshake.

ndd -set /dev/tcp tcp_ip_abort_interval 60000

The default total retransmission timeout value for a TCP connection in milliseconds. For a given TCP connection, if TCP has been re-transmitting for tcp_ip_abort_interval period and it has not received any acknowledgment from the other endpoint during this period, TCP closes this connection.

ndd -set /dev/tcp tcp_smallest_anon_port 1024

The smallest port number from which anonymous (ephemeral) ports are allocated (default: 32768); lowering it enlarges the anonymous port range.

Java Application Tuning Parameters

Number of Execution Threads

A general rule for thread count is to use as few threads as possible. The JVM performs best with the fewest busy threads. A good starting point for thread count can be found with the following equations.
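One common rule of thumb, offered here as a general assumption rather than as the article's exact formula, sizes the thread count from the CPU count and the ratio of blocked time to compute time per request:

(Number of Execution Threads) = (Number of CPUs) x (1 + Wait Time / Compute Time)

A purely CPU-bound workload (Wait Time near zero) thus needs roughly one thread per CPU, while I/O-heavy workloads justify more.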

It is important to remember that these equations give a good starting point for thread count tuning, not the best value for your application. The number of execution threads can greatly influence performance; therefore, proper sizing of this value is very important.

Number of Database Connections

The number of database connections, commonly known as a connection or resource pool, is closely tied to the number of execution threads. A rule of thumb is to match the number of database connections to the number of execute threads. This is a good starting point for finding the correct number of database connections. Over-configuring this value could cause unnecessary overhead to the database, while under-configuring could tie up all execution threads waiting on database I/O.

(Number of Database Connections) = (Number of Execution Threads)

Software Caches

Many server-side Java applications implement some type of software cache, commonly for JDBC result sets or frequently generated dynamic pages. Software caches are the part of an application most likely to cause unnecessary garbage collection overhead, a consequence of the cache architecture and the cache's replacement policy.

Most middle-tier applications have some sort of caching. These caches should be studied with GC in mind to see whether they increase GC activity; choose the architecture and replacement strategy that generates the least garbage. Careful implementation of caches with garbage collection in mind greatly improves performance simply by limiting garbage (see the sketch below).
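As a minimal sketch of a bounded cache whose replacement policy keeps garbage predictable (class name and capacity handling are illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

/** A bounded LRU cache: evicting the eldest entry caps the live set,
 *  so the garbage produced per replacement stays predictable. */
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // access-order iteration gives LRU behavior
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // drop the least recently used entry
    }
}

An unbounded cache, by contrast, grows until a full collection is forced, which is exactly the GC overhead the text warns about.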

Java Virtual Machine Tuning Parameters

Below are a few Java Virtual Machine tuning parameters that have been found to improve performance. There are many more tuning parameters; the following are examples of what has worked for us. A detailed list of all tuning parameters can be found in Java HotSpot VM Options.


-XX:+UseLWPSynchronization

Use LWP-based instead of thread-based synchronization (SPARC only).

-XX:SurvivorRatio=40

Ratio of eden/survivor space size [Solaris: 64, Linux/Windows: 8].

-XX:NewSize=128m
-XX:MaxNewSize=128m

Disable young generation resizing. To do this on HotSpot, simply fix the size of the young generation to a constant by setting NewSize and MaxNewSize to the same value.
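Putting several of these options together, a server launch line might look like this (class name and sizes illustrative):

java -server -XX:+UseLWPSynchronization -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=40 -verbose:gc MyServer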

swap

The swap -s command summarizes virtual swap space usage in four fields:

allocated – The total amount of swap space, in bytes, currently allocated for use as backing store.

reserved – The total amount of swap space, in bytes, not currently allocated, but claimed by memory mappings for possible future use.

used – The total amount of swap space, in bytes, that is either allocated or reserved.

available – The total amount of swap space, in bytes, currently available for future reservation and allocation.
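For reference, the command's one-line output has this form (figures illustrative):

# swap -s
total: 40236k bytes allocated + 7280k reserved = 47516k used, 1058960k available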

The used plus available figures equal the total swap space on the system, which includes a portion of physical memory and the swap devices (or files).

You can use the amount of swap space available and used (in the swap -s output) to monitor swap space usage over time. When a system's performance is good, run swap -s to see how much swap space is available. When performance slows down, check whether the amount of available swap space has decreased; then you can identify what changes to the system might have caused swap space usage to increase.

Keep in mind when using this command that the amount of physical memory available for swap usage changes dynamically as the kernel and user processes lock down and release physical memory.

The swap -l command displays swap space in 512-byte blocks and the swap -s command displays swap space in 1024-byte blocks. If you add up the blocks from swap -l and convert them to Kbytes, it will be less than used + available (in the swap -s output) because swap -l does not include physical memory in its calculation of swap space.