Tools and techniques for performance tuning in Linux

Too Many Cache Misses?

System performance depends heavily on how effectively the CPU caches are used: every cache miss costs cycles and can stall the CPU.

Sometimes cache misses occur because frequently used fields in a data structure straddle a cache line boundary. OProfile can diagnose this kind of problem.

Again, using the Intel Core 2 processor as an example, choose the LLC_MISSES event to profile all L2 cache requests that miss the L2 cache. To find the exact event to use, invoke opcontrol --list-events, which describes each event type available for your CPU.
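A typical OProfile session for this kind of measurement might look like the following sketch. It must be run as root on real hardware; the sample count of 6000 is an illustrative value, not a recommendation, and the exact event name should be confirmed with opcontrol --list-events first:

```shell
# Skip gracefully if oprofile is not installed on this system.
if ! command -v opcontrol >/dev/null 2>&1; then
    echo "oprofile not installed; skipping"
else
    opcontrol --list-events | grep LLC_MISSES   # confirm the event exists on this CPU
    opcontrol --setup --event=LLC_MISSES:6000   # take a sample every 6000th L2 miss
    opcontrol --start
    # ... run the workload under test here ...
    opcontrol --stop
    opreport --symbols                          # per-symbol breakdown of where misses cluster
fi
```

Symbols with a disproportionate share of LLC_MISSES samples in the opreport output are the first candidates for data-layout fixes.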

Locking Problems

A high context switching rate, relative to the number of running processes, is undesirable and can indicate lock contention. To find the most contended locks, enable lock statistics in the kernel with the lock_stat feature, available in kernel 2.6.23 and later. First, you'll need to recompile the kernel with the CONFIG_LOCK_STAT=y option. Then, before running the workload, clear the statistics with:

# echo 0 > /proc/lock_stat

After running the workload, review the lock statistics with the following command:

# cat /proc/lock_stat

The output of the preceding command is a list of locks in the kernel sorted by the number of contentions. For each lock, the output shows the number of contentions along with the minimum, maximum, and cumulative wait times, as well as the number of acquisitions along with the minimum, maximum, and cumulative hold times. The top call sites of each lock are also listed, letting you quickly locate where in the kernel the contention occurs.

It is worth noting that the lock statistics infrastructure incurs overhead. Once you have finished hunting for locks, you should disable this feature to maximize performance.
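Collection can also be toggled at runtime without rebooting; the sysctl path below is an assumption taken from the kernel's lock_stat documentation (run as root):

```shell
# Turn lock statistics collection off once the hunt is over,
# and back on before the next measurement run.
echo 0 > /proc/sys/kernel/lock_stat
echo 1 > /proc/sys/kernel/lock_stat
```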

Excessive Latency

Inconsistent, sputtering program throughput, applications that seem to go to sleep before coming alive again, and many processes under the blocked (b) column in vmstat are often signs of latency in the system. LatencyTOP is a new tool that helps diagnose such latency issues.

Starting with kernel 2.6.25, you can compile LatencyTOP support into the kernel by enabling the CONFIG_HAVE_LATENCYTOP_SUPPORT=y and CONFIG_LATENCYTOP=y options in the kernel configuration. After booting the LatencyTOP-enabled kernel, you can trace latency in the workload with the userspace latency tracing tool from the LatencyTOP website [2]. To start, build and install the LatencyTOP program with make install, then run the following as root:

# ./latencytop

The LatencyTOP program's top screen (Figure 2) provides a periodic dump of the top causes that lead to processes being blocked, sorted by the maximum blocked time for each cause. Also, you'll find information on the percentage of time a particular cause contributed to the total blocked time. The bottom screen provides similar information on a per-process basis.
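Before tracing, it is worth confirming that the running kernel was actually built with the LatencyTOP options. A quick sanity check might look like this (the config file path is an assumption; distributions differ in where they install it):

```shell
# Look for the LatencyTOP options in the running kernel's config.
config="/boot/config-$(uname -r)"
if [ -r "$config" ]; then
    grep -E 'CONFIG_(HAVE_)?LATENCYTOP' "$config" \
        || echo "LatencyTOP options not set in $config"
else
    echo "no readable kernel config at $config"
fi
```

If the two options do not appear with =y, latencytop will have nothing to report and the kernel needs to be reconfigured and rebuilt.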
