SUSE Linux Enterprise Server 11 SP3

System Analysis and Tuning Guide

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2 or (at
your option) version 1.3; with the Invariant Section being this copyright
notice and license. A copy of the license version 1.2 is included in the
section entitled “GNU Free Documentation License”.

For SUSE and Novell trademarks, see the Novell Trademark and Service Mark list
http://www.novell.com/company/legal/trademarks/tmlist.html.
All other third party trademarks are the property of their respective
owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell
trademark; an asterisk (*) denotes a third party trademark.

All information found in this book has been compiled with utmost attention
to detail. However, this does not guarantee complete accuracy. Neither
SUSE LLC, its affiliates, the authors nor the translators shall be held
liable for possible errors or the consequences thereof.

SUSE Linux Enterprise Server is used for a broad range of usage scenarios in enterprise
and scientific data centers. SUSE has ensured SUSE Linux Enterprise Server is set up in
a way that accommodates different operational purposes with optimal
performance. However, SUSE Linux Enterprise Server must meet very different demands when
employed on a number-crunching server compared to a file server, for
example.

Generally it is not possible to ship a distribution that is by default
optimized for all kinds of workloads, for the simple fact that
different workloads vary substantially in various aspects, most
importantly I/O access patterns, memory access patterns, and process
scheduling. A behavior that perfectly suits a certain workload might
reduce the performance of a completely different workload (for example,
I/O-intensive databases usually have completely different requirements
than CPU-intensive tasks, such as video encoding). The great
versatility of Linux makes it possible to configure your system in a way
that it brings out the best in each usage scenario.

This manual introduces you to means to monitor and analyze your system. It
describes methods to manage system resources and to tune your system. This
guide does not offer recipes for special scenarios,
because each server has its own different demands. Rather, it enables
you to thoroughly analyze your servers and make the most of them.

General Notes on System Tuning

Tuning a system requires a carefully planned approach. Learn which
steps are necessary to successfully improve your system.

The Linux kernel itself offers means to examine every nut, bolt and
screw of the system. This part introduces you to SystemTap, a scripting
language for writing kernel modules that can be used to analyze and
filter data. Collect debugging information and find bottlenecks by
using kernel probes and use perfmon2 to access the CPU's performance
monitoring unit. Last, monitor applications with the help of OProfile.

Learn how to set up a tailor-made system fitting exactly the server's
need. Get to know how to use power management while at the same time
keeping the performance of a system at a level that matches the current
requirements.

The Linux kernel can be optimized either by using sysctl or via the
/proc file system. This part covers tuning the I/O
performance and optimizing the way Linux schedules processes. It
also describes basic principles of memory management and shows how
memory management could be fine-tuned to suit needs of specific
applications and usage patterns. Furthermore, it describes how to
optimize network performance.

This part enables you to analyze and handle application or system
crashes. It introduces tracing tools such as strace or ltrace and
describes how to handle system crashes using Kexec and Kdump.

Getting the SUSE Linux Enterprise SDK

Some programs or packages mentioned in this guide are only available from
the SUSE Linux Enterprise SDK. The SDK is an add-on product for
SUSE Linux Enterprise Server and is available for download from
http://www.novell.com/developer/sle_sdk.html.

Many chapters in this manual contain links to additional documentation
resources. This includes additional documentation that is available on the
system as well as documentation available on the Internet.

For an overview of the documentation available for your product and the
latest documentation updates, refer to
http://www.suse.com/doc or to the following section:

We provide HTML and PDF versions of our books in different languages.
The following manuals for users and
administrators are available for this product:

Deployment Guide (↑Deployment Guide)

Shows how to install single or multiple systems and how to exploit the
product inherent capabilities for a deployment infrastructure. Choose
from various approaches, ranging from a local installation or a network
installation server to a mass deployment using a remote-controlled,
highly-customized, and automated installation technique.

Administration Guide (↑Administration Guide)

Covers system administration tasks like maintaining, monitoring, and
customizing an initially installed system.

Security Guide (↑Security Guide)

Introduces basic concepts of system security, covering both local and
network security aspects. Shows how to make use of the product inherent
security software like AppArmor (which lets you specify per program which
files the program may read, write, and execute), and the auditing
system that reliably collects information about any security-relevant
events.

Security and Hardening (↑Security and Hardening)

Deals with the particulars of installing and setting up a secure SUSE Linux Enterprise Server,
and additional post-installation processes required to further secure
and harden that installation. Supports the administrator with
security-related choices and decisions.

An administrator's guide for problem detection, resolution and
optimization. Find how to inspect and optimize your system by means of
monitoring tools and how to efficiently manage resources. Also contains
an overview of common problems and solutions, and of additional help
and documentation resources.

Virtualization with Xen (↑Virtualization with Xen)

Offers an introduction to the virtualization technology of your product. It
features an overview of the various fields of application and
installation types of each of the platforms supported by SUSE Linux Enterprise Server as well
as a short description of the installation procedure.

Virtualization with KVM for IBM System z (↑Virtualization with KVM for IBM System z)

Offers an introduction to setting up and managing virtualization with
KVM (Kernel-based Virtual Machine) on SUSE Linux Enterprise Server. Learn how to manage KVM
with libvirt or QEMU. The guide also contains detailed information
about requirements, limitations, and support status.

AutoYaST (↑AutoYaST)

AutoYaST is a system for installing one or more SUSE Linux Enterprise systems automatically
and without user intervention, using an AutoYaST profile that contains
installation and configuration data. The manual guides you through the
basic steps of auto-installation: preparation, installation, and
configuration.

Storage Administration Guide (↑Storage Administration Guide)

Provides information about how to manage storage devices on a SUSE Linux Enterprise Server.

In addition to the comprehensive manuals, several quick start guides are
available:

Installation Quick Start (↑Installation Quick Start)

Lists the system requirements and guides you step-by-step through the
installation of SUSE Linux Enterprise Server from DVD, or from an ISO image.

Linux Audit Quick Start

Gives a short overview of how to enable and configure the auditing system
and how to execute key tasks such as setting up audit rules, generating
reports, and analyzing the log files.

Gives a short introduction to LXC (a lightweight
“virtualization” method) and shows how to set up an LXC
host and LXC containers.

Find HTML versions of most product manuals in your installed system under
/usr/share/doc/manual or in the help centers of your
desktop. Find the latest documentation updates at
http://www.suse.com/doc where you can download PDF or HTML
versions of the manuals for your product.

To report bugs for a product component, log in to the Novell Customer Center from
http://www.suse.com/support/ and select My Support+Service Request.

User Comments

We want to hear your comments about and suggestions for this manual and
the other documentation included with this product. Use the User
Comments feature at the bottom of each page in the online documentation
or go to http://www.suse.com/doc/feedback.html and enter
your comments there.

Mail

For feedback on the documentation of this product, you can also send a
mail to doc-team@suse.de. Make sure to include the
document title, the product version, and the publication date of the
documentation. To report errors or suggest enhancements, provide a
concise description of the problem and refer to the respective section
number and page (or URL).

This manual discusses how to find the reasons for performance problems
and provides means to solve these problems. Before you start tuning your
system, you should make sure you have ruled out common problems and have
found the cause (bottleneck) for the problem. You should also have a
detailed plan on how to tune the system, because applying random tuning
tips will not help (and could make things worse).

Before you start tuning your system, try to describe the problem as
exactly as possible. Obviously, a simple and general “The system is
too slow!” is not a helpful problem description. If you plan to tune
your Web server for faster delivery of static pages, for example, it
makes a difference whether you need to generally improve the speed or
whether it only needs to be improved at peak times.

Furthermore, make sure you can apply a measurement to your problem;
otherwise you will not be able to verify whether the tuning was a
success. You should always be able to compare “before” and
“after”.

A performance problem is often caused by network or hardware problems,
bugs, or configuration issues. Make sure to rule out problems such as the
ones listed below before attempting to tune your system:

Check /var/log/warn and
/var/log/messages for unusual entries.

Check (using top or ps) whether a
certain process misbehaves by eating up unusual amounts of CPU time or
memory.

Check for network problems by inspecting
/proc/net/dev.

In case of I/O problems with physical disks, make sure it is not caused
by hardware problems (check the disk with the
smartmontools) or by a full disk.

Ensure that background jobs are scheduled to run at times when the
server load is low. Those jobs should also run with low priority
(set via nice).

If the machine runs several services using the same resources, consider
moving services to another server.
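The network check from the list above can be done by reading the interface counters directly:

```shell
# Per-interface receive/transmit counters, including errors and drops.
# Growing "errs" or "drop" columns point to a network problem.
cat /proc/net/dev
```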

Finding the bottleneck is very often the hardest part of tuning a
system. SUSE Linux Enterprise Server offers a lot of tools to help you with this task.
See Part II, “System Monitoring” for detailed information on
general system monitoring applications and log file analysis. If the
problem requires a long-term, in-depth analysis, the Linux kernel offers
means to perform such an analysis. See
Part III, “Kernel Monitoring” for coverage.

Once you have collected the data, it needs to be analyzed. First, check
whether the server's hardware (memory, CPU, bus) and its I/O capacities
(disk, network) are sufficient. If these basic conditions are met, the
system might benefit from tuning.

Make sure to carefully plan the tuning itself. It is of vital importance
to only do one step at a time. Only by doing so will you be able to
measure whether the change provided an improvement or even had a negative
impact. Each tuning activity should be measured over a sufficient period
of time to ensure you can do an analysis based on significant
data. If you cannot measure a positive effect, do not make the change
permanent. Chances are that it might have a negative effect in the
future.

There are a number of programs, tools, and utilities which you can use to
examine the status of your system. This chapter introduces some of them
and describes their most important and frequently used parameters.

For each of the described commands, examples of the relevant outputs are
presented. In the examples, the first line is the command itself (after
the > or # sign prompt). Omissions are indicated with square brackets
([...]) and long lines are wrapped where necessary.
Line breaks for long lines are indicated by a backslash
(\).

# command -x -y
output line 1
output line 2
output line 3 is annoyingly long, so long that \
we have to break it
output line 4
[...]
output line 98
output line 99

The descriptions have been kept short so that we can include as many
utilities as possible. Further information for all the commands can be
found in the manual pages. Most of the commands also understand the
parameter --help, which produces a brief list of possible
parameters.

While most Linux system monitoring tools monitor only a specific aspect
of the system, there are a few “Swiss Army
knife” tools that show various aspects of the system at a glance.
Use these tools first to get an overview and find out which part
of the system to examine further.

vmstat collects information about processes, memory, I/O, interrupts and
CPU. If called without a sampling rate, it displays average values since
the last reboot. When called with a sampling rate, it displays actual
samples:
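For example, the following call (the interval and count are arbitrary choices) prints five samples at two-second intervals, including the recommended -a columns:

```shell
# Five samples at 2-second intervals; -a adds the inact/active columns.
vmstat -a 2 5
```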

The first line of the vmstat output always displays average values
since the last reboot.

The columns show the following:

r

Shows the number of processes in the run queue. These processes are
waiting for a free CPU slot to be executed. If the number of
processes in this column is constantly higher than the number of CPUs
available, this is an indication of insufficient CPU power.

b

Shows the number of processes waiting for a resource other than a
CPU. A high number in this column may indicate an I/O problem
(network or disk).

swpd

The amount of swap space (KB) currently used.

free

The amount of unused memory (KB).

inact

Recently unused memory that can be reclaimed. This column is only
visible when calling vmstat with the parameter
-a (recommended).

active

Recently used memory that normally does not get reclaimed. This
column is only visible when calling vmstat with
the parameter -a (recommended).

buff

File buffer cache (KB) in RAM. This column is not visible when
calling vmstat with the parameter
-a (recommended).

cache

Page cache (KB) in RAM. This column is not visible when calling
vmstat with the parameter -a
(recommended).

si

Amount of data (KB) that is moved from swap to RAM per second. High
values over a long period of time in this column are an indication
that the machine would benefit from more RAM.

so

Amount of data (KB) that is moved from RAM to swap per second. High
values over a longer period of time in this column are an indication
that the machine would benefit from more RAM.

bi

Number of blocks per second received from a block device (e.g. a disk
read). Note that swapping also impacts the values shown here.

bo

Number of blocks per second sent to a block device (e.g. a disk
write). Note that swapping also impacts the values shown here.

in

Interrupts per second. A high value indicates a high I/O level
(network and/or disk).

cs

Number of context switches per second. Simplified, this means that the
kernel has to replace the executable code of one program in memory with
that of another program.

us

Percentage of CPU usage from user processes.

sy

Percentage of CPU usage from system processes.

id

Percentage of CPU time spent idling. If this value is zero over a
longer period of time, your CPU(s) are working to full capacity. This
is not necessarily a bad sign—rather refer to the values in
columns r and b to determine if
your machine is equipped with sufficient CPU power.

wa

If "wa" time is non-zero, it indicates throughput lost due to waiting
for I/O. This may be inevitable, for example, if a file is being read
for the first time, background writeback cannot keep up, and so on.
It can also be an indicator for a hardware bottleneck (network or
hard disk). Lastly, it can indicate a potential for tuning the
virtual memory manager (refer to
Chapter 15, Tuning the Memory Management Subsystem).

sar can generate extensive reports on almost all
important system activities, among them CPU, memory, IRQ usage, IO, or
networking. It can either generate reports on the fly or query existing
reports gathered by the system activity data collector
(sadc). sar and
sadc both gather all their data from the
/proc file system.

sysstat Package

sar and sadc are part of the
sysstat package. You need to install the
package either with YaST, or with zypper in
sysstat.

If you want to monitor your system over a longer period of time, use
sadc to automatically collect the data. You can read
this data at any time using sar. To start
sadc, simply run /etc/init.d/boot.sysstat
start. This will add a link to
/etc/cron.d/ that calls sadc
with the following default configuration:

All available data will be collected.

Data is written to /var/log/sa/saDD, where
DD stands for the current day. If a file
already exists, it will be archived.

The summary report is written to
/var/log/sa/sarDD, where
DD stands for the current day. Already
existing files will be archived.

Data is collected every ten minutes, a summary report is generated
every 6 hours (see /etc/sysstat/sysstat.cron).

The data is collected by the /usr/lib64/sa/sa1
script (or /usr/lib/sa/sa1 on 32-bit systems)

The summaries are generated by the script
/usr/lib64/sa/sa2 (or
/usr/lib/sa/sa2 on 32-bit systems)

If you need to customize the configuration, copy the
sa1 and sa2 scripts and
adjust them according to your needs. Replace the link
/etc/cron.d/sysstat with a customized copy of
/etc/sysstat/sysstat.cron calling your scripts.

To generate reports on the fly, call sar with an
interval (seconds) and a count. To generate reports from files specify
a filename with the option -f instead of interval and
count. If filename, interval and count are not specified,
sar attempts to generate a report from
/var/log/sa/saDD, where
DD stands for the current day. This is the
default location to where sadc writes its data.
Query multiple files with multiple -f options.
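Two typical invocations, assuming the sysstat package is installed and sadc has already collected data (the file name follows the saDD convention described above):

```shell
# On-the-fly report: ten samples at 2-second intervals.
sar 2 10

# Report from today's collector file; DD is the current day of month.
sar -f /var/log/sa/sa$(date +%d)
```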

Find examples of useful sar calls and their
interpretation below. For detailed information on the meaning of each
column, refer to the sar(1) man page. Also refer
to the man page for more options and reports;
sar offers plenty of them.

When called with no options, sar shows a basic
report about CPU usage. On multi-processor machines, results for all
CPUs are summarized. Use the option -P ALL to also
see statistics for individual CPUs.

If the value for %iowait (percentage of the CPU
being idle while waiting for I/O) is significantly higher than zero
over a longer period of time, there is a bottleneck in the I/O system
(network or hard disk). If the %idle value is zero
over a longer period of time, your CPU(s) are working to full
capacity.

The majflt/s (major faults per second) column shows
how many pages are loaded from disk (swap) into memory. A large number
of major faults slows down the system and is an indication of
insufficient main memory. The %vmeff column shows
the number of pages scanned (pgscand/s) in relation
to the ones being reused from the main memory cache or the swap cache
(pgsteal/s). It is a measurement of the efficiency
of page reclaim. Healthy values are either near 100 (every inactive
page swapped out is being reused) or 0 (no pages have been scanned).
The value should not drop below 30.

If your machine uses multiple disks, you will get the best
performance if I/O requests are evenly spread over all disks. Compare
the Average values for tps,
rd_sec/s, and wr_sec/s of all
disks. Constantly high values in the svctm and
%util columns could be an indication that the
amount of free space on the disk is insufficient.

sar reports are not always easy to parse for humans.
kSar, a Java application visualizing your sar data,
creates easy-to-read graphs. It can even generate PDF reports. kSar
takes data generated on the fly as well as past data from a file. kSar
is licensed under the BSD license and is available from
http://ksar.atomique.net/.

The utility mpstat examines activities of each
available processor. If your system has one processor only, the global
average statistics will be reported.

With the -P option, you can specify the number of
the processor to be reported (note that 0 is the first processor). The
timing arguments work the same way as with the iostat
command. Entering mpstat -P 1 2 5
prints five reports for the second processor (number 1) at 2 second
intervals.

If you need to see what load a particular task applies to your system,
use the pidstat command. It prints the activity of
every selected task, or of all tasks managed by the Linux kernel if no
task is specified. You can also set the number of reports to be
displayed and the time interval between them.

For example, pidstat -C top 2 3
prints the load statistic for tasks whose command name includes the
string “top”. There will be three reports printed at two
second intervals.

The special shell variable $$, whose value is the
process ID of the shell itself, can also be passed to
pidstat.
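For example, to report on the shell itself (a sketch; requires the sysstat package):

```shell
# $$ expands to the PID of the current shell; report its activity
# twice at 2-second intervals.
pidstat -p $$ 2 2
```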

The command lsof lists all the files currently open
when used without any parameters. There are often thousands of open
files, therefore, listing all of them is rarely useful. However, the
list of all files can be combined with search functions to generate
useful lists. For example, list all used character devices:
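A minimal sketch of such a combination (lsof reports character devices as CHR in its TYPE column):

```shell
# List only open character devices by filtering the full listing.
lsof | grep CHR
```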

udevadm monitor listens to the kernel uevents and
events sent out by a udev rule and prints the device path (DEVPATH) of
the event to the console. This is a sequence of events while connecting
a USB memory stick:

Monitoring udev Events

Only the root user is allowed to monitor udev events by running the
udevadm command.

The Linux audit framework is a complex auditing system that collects
detailed information about all security-related events. These records
can subsequently be analyzed to discover whether, for example, a violation
of security policies occurred. For more information on audit, see
Part “The Linux Audit Framework” (↑Security Guide).

The command top, which stands for table of
processes, displays a list of processes that is refreshed
every two seconds. To terminate the program, press Q.
The parameter -n 1 terminates the program after a
single display of the process list. The following is an example output
of the command top -n 1:

hyptop provides a dynamic real-time view of a
System z hypervisor environment, using the kernel infrastructure via
debugfs. It works with either the z/VM or the LPAR hypervisor. Depending
on the available data, it shows, for example, CPU and memory consumption
of active LPARs or z/VM guests. It provides a curses-based user
interface similar to the top command.
hyptop provides two windows:

sys_list: Shows a list of systems that the
current hypervisor is running

sys: Shows one system in more detail

You can run hyptop in interactive mode (default) or
in batch mode with the -b option. Help in the
interactive mode is available by pressing ? after
hyptop is started.

The iotop utility displays a table of I/O usage by
processes or threads.

iotop is not installed by default. You need to
install it manually with zypper in iotop as
root.

iotop displays columns for the I/O bandwidth read and
written by each process during the sampling period. It also displays the
percentage of time the process spent while swapping in and while waiting
on I/O. For each process, its I/O priority (class/level) is shown. In
addition, the total I/O bandwidth read and written during the sampling
period is displayed at the top of the interface.

Use the left and right arrows to change the sorting, R
to reverse the sorting order, O to toggle the
--only option, P to toggle the
--processes option, A to toggle the
--accumulated option, Q to quit or
I to change the priority of a thread or a process'
thread(s). Any other key will force a refresh.

Following is an example output of the command iotop
--only, while find and
emacs are running:

The kernel determines how much CPU time a process gets compared to
other processes from the process' nice level, also called niceness.
The higher the
“nice” level of a process is, the less CPU time it will
take from other processes. Nice levels range from -20 (the least
“nice” level) to 19. Negative values can only be set by
root.

Adjusting the niceness level is useful when running a long-running,
non-time-critical process that uses large amounts of CPU time, such as
compiling a kernel on a system that also performs other tasks. Making
such a process “nicer” ensures that the other tasks, for
example a Web server, have a higher priority.

Calling nice without any parameters prints the
current niceness:

tux@mercury:~> nice
0

Running nice command increments the
current nice level for the given command by 10. Using
nice -n level command lets you specify a
new niceness relative to the current one.
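For example (sleep merely stands in for a long-running, CPU-hungry command):

```shell
# Start a job at the lowest scheduling priority (nice level 19).
nice -n 19 sleep 1

# Nested calls show the relative adjustment: the inner "nice",
# called without arguments, prints the level the outer one set.
nice -n 10 nice
```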

To change the niceness of a running process, use renice
priority -p process ID,
for example:

renice +5 3266

To renice all processes owned by a specific user, use the option
-u user. Process groups are
reniced with the option -g process group
ID.

The options -b, -k,
-m, -g show the output in bytes, KB,
MB, or GB, respectively. The parameter -s delay ensures
that the display is refreshed every delay
seconds. For example, free -s 1.5 produces an update
every 1.5 seconds.

Use /proc/meminfo to get more detailed information
on memory usage than with free. Actually,
free takes some of its data from this file. See an
example output from a 64-bit system below (note that it differs slightly
on 32-bit systems due to different memory management):

Exactly determining how much memory a certain process is consuming is
not possible with standard tools like top or
ps. Use the smaps subsystem, introduced in Kernel
2.6.14, if you need exact data. It can be found at
/proc/pid/smaps and
shows you the number of clean and dirty memory pages the process with
the ID PID is using at that time. It
differentiates between shared and private memory, so you are able to see
how much memory the process is using without including memory shared
with other processes.
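For example, the following sketch sums the private (clean plus dirty) pages of the current shell from its smaps file:

```shell
# Sum the Private_Clean and Private_Dirty values (in kB) of this shell.
awk '/^Private/ { sum += $2 } END { print sum " kB" }' /proc/$$/smaps
```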

ifconfig is a powerful tool to set up and control
network interfaces. You can also use it to quickly view
basic statistics about one or all network interfaces present in the
system, such as whether the interface is up, the number of errors or
dropped packets, or packet collisions.

If you run ifconfig with no additional parameter, it
lists all active network interfaces. ifconfig
-a lists all (even inactive) network
interfaces, while ifconfig
net_interface lists statistics for the
specified interface only.

When displaying network connections or statistics with
netstat, you can specify the socket type to
display: TCP (-t), UDP
(-u), or raw (-r). The
-p option shows the PID and name of the program to
which each socket belongs.

For example, netstat -t -p lists all TCP
connections and the programs using these connections.

The iptraf utility is a menu based Local Area Network
(LAN) monitor. It generates network statistics, including TCP and UDP
counts, Ethernet load information, IP checksum errors and others.

iptraf is not installed by default. Install it with
zypper in iptraf as root.

If you enter the command without any option, it runs in an interactive
mode. You can navigate through graphical menus and choose the statistics
that you want iptraf to report. You can also specify
which network interface to examine.

The command iptraf understands several options and
can be run in a batch mode as well. The following example will collect
statistics for network interface eth0 (-i) for 1 minute
(-t). It will be run in the background
(-B) and the statistics will be written to the
iptraf.log file in your home directory
(-L).

Further information is available in the text file
/usr/src/linux/Documentation/filesystems/proc.txt
(this file is available when the package
kernel-source is installed). Find information
about processes currently running in the
/proc/NNN directories,
where NNN is the process ID (PID) of the
relevant process. Every process can find its own characteristics in
/proc/self/:

System control parameters are used to modify the Linux kernel parameters
at runtime. They can be checked with the sysctl
command, or by looking into /proc/sys/. A brief
description of some of /proc/sys/'s subdirectories
follows.

/proc/sys/vm/

Entries in this path relate to information about the virtual memory,
swapping, and caching.

/proc/sys/kernel/

Entries in this path represent information about the task scheduler,
system shared memory, and other kernel-related parameters.

/proc/sys/fs/

Entries in this path relate to used file handles, quotas, and other
file system-oriented parameters.

/proc/sys/net/

Entries in this path relate to information about network bridges, and
general network parameters (mainly the ipv4/
subdirectory).
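A parameter can be read either through sysctl or directly from the corresponding file under /proc/sys/; vm.swappiness is just one well-known example of such a tunable:

```shell
# Dots in the sysctl name map to slashes under /proc/sys.
sysctl vm.swappiness
cat /proc/sys/vm/swappiness
```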

The command lsusb lists all USB devices. With the
option -v, it prints a more detailed list. The detailed
information is read from the directory
/proc/bus/usb/. The following is the output of
lsusb with these USB devices attached: hub, memory
stick, hard disk and mouse.

Display the total size of all the files in a given directory and its
subdirectories with the command du. The parameter
-s suppresses the output of detailed information,
giving only a total for each argument. Adding -h
transforms the output into a human-readable form:
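For example (/usr/share/doc is merely a sample path):

```shell
# One human-readable total per argument.
du -sh /usr/share/doc
```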

It can be useful to determine what processes or users are currently
accessing certain files. Suppose, for example, you want to unmount a
file system mounted at /mnt.
umount returns "device is busy." The command
fuser can then be used to determine what processes
are accessing the device:

Following termination of the less process, which was
running on another terminal, the file system can successfully be
unmounted. When used with the -k option,
fuser will also kill the processes accessing the file.

There is a lot of data in the world around you that can be easily
measured over time, for example changes in temperature, or the amount
of data sent or received by your computer's network interface. RRDtool
can help you store and visualize such data in detailed and customizable
graphs.

RRDtool is available for most UNIX platforms and Linux distributions.
SUSE® Linux Enterprise Server ships RRDtool as well. Install it either with YaST or
by entering zypper install rrdtool on the command
line as root.

There are Perl, Python, Ruby, or PHP bindings available for RRDtool, so
that you can write your own monitoring scripts with your preferred
scripting language.

RRDtool is an abbreviation of Round Robin Database tool.
Round Robin is a method for working with a
constant amount of data. It uses the principle of a circular buffer,
where the data row being read has neither a beginning nor an end.
RRDtool uses Round Robin Databases to store and read its data.

As mentioned above, RRDtool is designed to work with data that change
over time. The ideal case is a sensor that repeatedly reads measured
data (such as temperature or speed) at constant intervals, and then
exports them in a given format. Such data are perfectly suited for
RRDtool, and it is easy to process them and create the desired output.

Sometimes it is not possible to obtain the data automatically and
regularly. Their format needs to be pre-processed before being supplied
to RRDtool, and often you even need to operate RRDtool manually.

The following is a simple example of basic RRDtool usage. It illustrates
all three important phases of the usual RRDtool workflow:
creating a database, updating
measured values, and viewing the output.

Suppose we want to collect and view information about the memory usage
on a Linux system as it changes over time. To make the example more
vivid, we measure the currently free memory over a period of 40 seconds
at 4-second intervals. During the measurement, three applications that
typically consume a lot of system memory are started and closed: the
Firefox Web browser, the Evolution e-mail client, and the Eclipse
development framework.

RRDtool is very often used to measure and visualize network traffic. In
such cases, the Simple Network Management Protocol (SNMP) is used. This
protocol can query network devices for the values of their
internal counters, and exactly these values are stored with RRDtool.
For more information on SNMP, see
http://www.net-snmp.org/.

Our situation is different - we need to obtain the data manually. A
helper script free_mem.sh repeatedly reads the
current amount of free memory and writes it to the standard output.

The time interval is set to 4 seconds, and is implemented with the
sleep command.

RRDtool accepts time information in a special format, so-called
Unix time. It is defined as the number of
seconds since midnight of January 1, 1970 (UTC). For example,
1272907114 represents 2010-05-03 17:18:34 UTC.
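GNU date (as shipped with SUSE Linux Enterprise Server) can convert in both directions:

```shell
# Convert a Unix time stamp to a readable date (UTC):
date -u -d @1272907114 '+%Y-%m-%d %H:%M:%S'   # prints 2010-05-03 17:18:34
# Print the current time as Unix time:
date +%s
```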

This command creates a file called free_mem.rrd
for storing our measured values in a Round Robin type database.

The --start option specifies the time (in Unix time)
when the first value will be added to the database. In this example,
it is one less than the first time value of the
free_mem.sh output (1272974835).

The --step specifies the time interval in seconds
with which the measured data will be supplied to the database.

The DS:memory:GAUGE:600:U:U part introduces a new
data source for the database. It is called
memory, its type is gauge,
the maximum time between two updates is 600 seconds, and the
minimum and maximum values
in the measured range are unknown (U).

RRA:AVERAGE:0.5:1:24 creates a Round Robin archive
(RRA) whose stored data are processed with a
consolidation function (CF) that calculates the
average of data points. The three arguments of the
consolidation function are appended to the end of the line.

If no error message is displayed, the
free_mem.rrd database is created in the current
directory:

After the database is created, you need to fill it with the measured
data. In Section 2.11.2.1, “Collecting Data”, we
already prepared the file free_mem_updates.log,
which consists of rrdtool update commands. These
commands update the database values for us.

--start and --end limit the time
range within which the graph will be drawn.

--step specifies the time resolution (in seconds) of
the graph.

The DEF:... part is a data definition called
free_memory. Its data are read from the
free_mem.rrd database and its data source called
memory. The average value
points are calculated, because no others were defined in
Section 2.11.2.2, “Creating Database”.

The LINE... part specifies properties of the line
to be drawn into the graph. It is 2 pixels wide, its data come from
the free_memory definition, and its color is
red.

--vertical-label sets the label to be printed along
the y axis, and --title sets
the main label for the whole graph.

--zoom specifies the zoom factor for the graph. This
value must be greater than zero.

--x-grid specifies how to draw grid lines and their
labels into the graph. Our example places them every second, while
major grid lines are placed every 4 seconds. Labels are placed every
10 seconds under the major grid lines.

RRDtool is a very complex tool with a lot of sub-commands and command
line options. Some of them are easy to understand, but you have to
really study RRDtool to make it produce the results
you want and fine-tune them according to your liking.

Apart from RRDtool's man page (man 1 rrdtool), which
gives you only basic information, you should have a look at the
RRDtool home
page. There you find detailed
documentation
of the rrdtool command and all its sub-commands.
There are also several
tutorials
to help you understand the common RRDtool workflow.

If you are interested in monitoring network traffic, have a look at
MRTG. It stands for
Multi Router Traffic Grapher and can graph the activity of all sorts of
network devices. It can easily make use of RRDtool.

Nagios is a stable, scalable and extensible enterprise-class network and
system monitoring tool which allows administrators to monitor network and
host resources such as HTTP, SMTP, POP3, disk usage and processor load.
Originally Nagios was designed to run under Linux, but it can also be
used on several UNIX operating systems. This chapter covers the
installation and parts of the configuration of Nagios
(http://www.nagios.org/).

Both methods install the packages
nagios and
nagios-www. The latter RPM
package contains a Web interface for Nagios which allows you, for
example, to view the service status and the problem history. However,
it is not absolutely necessary.

Nagios has a modular design and thus uses external check plug-ins to
verify whether a service is available or not. It is recommended to
install the nagios-plugins RPM package that
contains ready-made check plug-ins. However, it is also possible to
write your own custom check plug-ins.

In addition to those configuration files, Nagios comes with very
flexible and highly customizable configuration files called Object
Definition configuration files. These configuration files are
very important since they define the following objects:

Hosts

Services

Contacts

The flexibility lies in the fact that objects are easily extensible.
Imagine you are responsible for a host with only one service running.
Now you want to install another service on the same host machine
and monitor that service as well. It is possible to add
another service object and assign it to the host object without much
effort.

Right after the installation, Nagios offers default templates for object
definition configuration files. They can be found in
/etc/nagios/objects. The following describes how
hosts, services and contacts are added:
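A host definition of the kind described below could look like the following sketch (the host name and address are invented for illustration; the option values mirror the description that follows):

```
define host {
    use                   generic-host
    host_name             emmy
    address               192.168.1.1
    check_period          24x7
    check_interval        5
    retry_interval        1
    max_check_attempts    10
    notification_period   workhours
    notification_interval 120
    notification_options  d,u,r
}
```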

The host_name option defines a name to identify
the host that has to be monitored. address is
the IP address of this host. The use statement
tells Nagios to inherit other configuration values from the generic-host
template. check_period defines the time period
during which the machine is monitored, in this case 24x7.
check_interval makes Nagios check the host every
5 minutes, and retry_interval
tells Nagios to schedule host check retries at 1-minute intervals.
Nagios tries to execute the checks multiple times when they do not pass.
You can define how many attempts Nagios should do with the
max_check_attempts directive. All configuration
flags beginning with notification handle how
Nagios should behave when a failure of a monitored service occurs. In
the host definition above, Nagios notifies the administrators only on
working hours. However, this can be adjusted with
notification_period. According to
notification_interval, notifications will be
resent every two hours. notification_options
contains four different flags: d, u, r and
n. They control the states in which Nagios should
notify the administrator. d stands for a
down state, u for
unreachable and r for
recoveries. n sends no
notifications at all.
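A service definition matching the description below could look like this sketch (host name, contact group and the check_command arguments are illustrative placeholders):

```
define service {
    use                 generic-service
    host_name           emmy
    service_description PING
    contact_groups      router-admins
    check_command       check_ping!100.0,20%!500.0,60%
}
```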

The first configuration directive use tells
Nagios to inherit from the generic-service
template. host_name is the name that assigns
the service to the host object. The host itself is defined in the host
object definition. A description can be set with
service_description. In the example above the
description is just PING. Within the
contact_groups option it is possible to refer
to a group of people who will be contacted on a failure of the service.
This group and its members are later defined in a contact group object
definition. check_command sets the program that
checks whether the service is available, or not.
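The contact and contact group objects described below could be sketched as follows (the person's name, user name and e-mail address are invented placeholders):

```
define contact {
    use           generic-contact
    contact_name  max-mustermann
    alias         Max Mustermann
    email         max@example.com
}

define contactgroup {
    contactgroup_name router-admins
    alias             Administrators
    members           max-mustermann
}
```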

The example listing above shows a direct
contact definition and its corresponding
contactgroup. The
contact definition contains the e-mail address
and the name of the person who is contacted on a failure of a service.
Usually this is the responsible administrator.
use inherits configuration values from the
generic-contact definition.

If you need to monitor a different remote service, it is possible to
adjust check_command in
Step 5. A full list of all available check
programs can be obtained by executing ls
/usr/lib/nagios/plugins/check_*.

Make sure that you have defined all necessary objects correctly. Be
careful with the spelling.

(Return code of 127 is out of bounds - plugin may be missing)

Make sure that you have installed
nagios-plugins.

E-mail notification does not work

Make sure that you have installed and configured a mail server such as
postfix or exim
correctly. You can verify whether your mail server works with echo
"Mail Server Test!" | mail foo@bar.com, which sends an e-mail
to foo@bar.com. If this e-mail arrives, your mail server is working
correctly. Otherwise, check the log files of the mail server.

System log file analysis is one of the most important tasks when analyzing
the system. In fact, looking at the system log files should be the first
thing to do when maintaining or troubleshooting a system. SUSE Linux Enterprise Server
automatically logs almost everything that happens on the system in detail.
Normally, system log files are written in plain text and therefore, can be
easily read using an editor or pager. They are also parsable by scripts,
allowing you to easily filter their content.

System log files are always located under the
/var/log directory. The following list presents an
overview of all system log files from SUSE Linux Enterprise Server present after a
default installation. Depending on your installation scope,
/var/log also contains log files from other services
and applications not listed here. Some files and directories described
below are “placeholders” and are only used when the
corresponding application is installed. Most log files are only visible
to the user root.

acpid

Log of the advanced configuration and power interface event daemon
(acpid), a daemon to notify
user-space programs of ACPI events.
acpid will log all of
its activities, as well as the STDOUT and
STDERR of any actions to syslog.

apparmor

AppArmor log files. See Part “Confining Privileges with AppArmor” (↑Security Guide) for details of AppArmor.

audit

Logs from the audit framework. See
Part “The Linux Audit Framework” (↑Security Guide) for details.

boot.msg

Log of the system init process—this file contains all boot
messages from the Kernel, the boot scripts and the services started
during the boot sequence.

Check this file to find out whether your hardware has been correctly
initialized and whether all services have been started successfully.

boot.omsg

Log of the system shutdown process—this file contains all messages
issued on the last shutdown or reboot.

ConsoleKit/*

Logs of the ConsoleKit daemon
(daemon for tracking what users are logged in and how they interact
with the computer).

cups/

Access and error logs of the Common UNIX Printing System
(cups).

faillog

Database file that contains all login failures. Use the
faillog command to view. See man 8
faillog for more information.

firewall

Firewall logs.

gdm/*

Log files from the GNOME display manager.

krb5

Log files from the Kerberos network authentication system.

lastlog

The lastlog file is a database which contains info on the last login
of each user. Use the command lastlog to view. See
man 8 lastlog for more information.

localmessages

Log messages of some boot scripts, for example the log of the DHCP
client.

mail*

Mail server (postfix,
sendmail) logs.

messages

This is the default place where all Kernel and system log messages go
and should be the first place (along with
/var/log/warn) to look at in case of problems.

NetworkManager

NetworkManager log files.

news/*

Log messages from a news server.

ntp

Logs from the Network Time Protocol daemon
(ntpd).

pk_backend_zypp

PackageKit (with libzypp
backend) log files.

puppet/*

Log files from the data center automation tool puppet.

samba/*

Log files from samba, the Windows SMB/CIFS file server.

SaX.log

Logs from SaX2, the SUSE advanced X11 configuration tool.

scpm

Logs from the system configuration profile management
(scpm).

warn

Log of all system warnings and errors. This should be the first place
(along with /var/log/messages) to look at in case
of problems.

wtmp

Database of all login/logout activities, runlevel changes and remote
connections. Use the command last to view. See
man 1 last for more information.

xinetd.log

Log files from the extended Internet services daemon
(xinetd).

Xorg.0.log

X startup log file. Refer to this in case you have problems starting
X. Copies from previous X starts are numbered
Xorg.?.log.

YaST2/*

All YaST log files.

zypp/*

libzypp log files. Refer to
these files for the package installation history.

To view log files, you can use your favorite text editor. There is also a
simple YaST module for viewing /var/log/messages,
available in the YaST Control Center under Miscellaneous+System Log.

For viewing log files in a text console, use the commands
less or more. Use
head and tail to view the beginning
or end of a log file. To view entries appended to a log file in real-time
use tail -f. For information about
how to use these tools, see their man pages.

To search for strings or regular expressions in log files use
grep. awk is useful for parsing and
rewriting log files.
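A few typical invocations follow, shown on a scratch file so that the commands are reproducible (on a live system you would point them at files under /var/log, usually as root; the sample messages are made up for the demonstration):

```shell
# Create a small sample "log file":
log=$(mktemp)
printf '%s\n' \
    'kernel: eth0: link is up' \
    'sshd: Failed password for root' \
    'kernel: eth0: link is down' > "$log"

tail -n 1 "$log"                   # view the newest entry
grep -c 'Failed password' "$log"   # count lines matching a pattern
# Extract the first ": "-separated field and deduplicate it:
awk -F': ' '{ print $1 }' "$log" | sort -u

rm -f "$log"
```

With a real log, the same pattern applies: tail -f /var/log/messages follows new entries in real time, and grep/awk filter and rewrite them.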

Log files under /var/log grow on a daily basis and
quickly become very large. logrotate is a tool that
helps you manage these files and control their growth. It allows
automatic rotation, removal, compression, and mailing of log files. Log
files can be handled periodically (daily, weekly, or monthly) or when
they exceed a particular size.

logrotate is usually run as a daily cron job. It does
not modify any log files more than once a day unless the log is to be
modified because of its size, because logrotate is
being run multiple times a day, or the --force option is
used.

The main configuration file of logrotate is
/etc/logrotate.conf. System packages as well as
programs that produce log files (for example,
apache2) put their own
configuration files in the /etc/logrotate.d/
directory. The content of /etc/logrotate.d/ is
included via /etc/logrotate.conf.
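A drop-in file in /etc/logrotate.d/ could look like the following sketch (the log file path is a hypothetical example; the directives shown are standard logrotate options):

```
/var/log/myapp.log {
    weekly          # rotate once a week
    rotate 4        # keep four rotated copies, delete older ones
    compress        # gzip rotated logs
    missingok       # do not complain if the log is absent
    notifempty      # skip rotation when the log is empty
}
```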

logwatch is a customizable, pluggable log-monitoring
script. It parses system logs, extracts the important information and
presents it in a human-readable manner. To use
logwatch, install the
logwatch package.

logwatch can either be used at the command-line to
generate on-the-fly reports, or via cron to regularly create custom
reports. Reports can either be printed on the screen, saved to a file, or
be mailed to a specified address. The latter is especially useful when
automatically generating reports via cron.

The command-line syntax is simple: you basically tell logwatch
which service and time span to cover, and at which detail level to
generate a report:

The --range option has a complex syntax—see
logwatch --range help for details. A
list of all services that can be queried is available with the following
command:

ls /usr/share/logwatch/default.conf/services/ | sed 's/\.conf//g'

logwatch can be customized in great detail. However,
the default configuration should be sufficient in most cases. The default
configuration files are located under
/usr/share/logwatch/default.conf/. Never change them,
because they would be overwritten with the next update. Rather
place custom configuration in /etc/logwatch/conf/
(you may use the default configuration file as a template, though). A
detailed HOWTO on customizing logwatch is available at
/usr/share/doc/packages/logwatch/HOWTO-Customize-LogWatch.
The following config files exist:

logwatch.conf

The main configuration file. The default version is extensively
commented. Each configuration option can be overwritten on the command
line.

ignore.conf

Filter for all lines that should globally be ignored by
logwatch.

services/*.conf

The service directory holds configuration files for each service you
can generate a report for.

logger is a tool for making entries in the system log.
It provides a shell command interface to the syslog(3) system log module.
For example, the following line outputs its message in
/var/log/messages:

logger -t Test "This message comes from $USER"

Depending on the current user and hostname,
/var/log/messages contains a line similar to this:

SystemTap provides a command line interface and a scripting language to
examine the activities of a running Linux system, particularly the kernel,
in fine detail. SystemTap scripts are written in the SystemTap scripting
language, compiled into C code kernel modules, and inserted into the
running kernel. The scripts can be designed to extract, filter and summarize data,
thus allowing the diagnosis of complex performance problems or functional
problems. SystemTap provides information similar to the output of tools like
netstat, ps, top,
and iostat. However, more filtering and analysis
options can be used for the collected information.

Each time you run a SystemTap script, a SystemTap session is started. A number
of passes are done on the script before it is allowed to run, at which
point the script is compiled into a kernel module and loaded. In case the
script has already been executed before and no changes regarding any
components have occurred (for example, regarding compiler version, kernel
version, library path, script contents), SystemTap does not compile the
script again, but uses the *.c and
*.ko data stored in the SystemTap cache
(~/.systemtap). The module is unloaded when the tap
has finished running. For an example, see the test run in
Section 5.2, “Installation and Setup” and the respective
explanation.

SystemTap usage is based on SystemTap scripts (*.stp).
They tell SystemTap which type of information to collect, and what to do
once that information is collected. The scripts are written in the
SystemTap scripting language that is similar to AWK and C. For the language
definition, see
http://sourceware.org/systemtap/langref/.

The essential idea behind a SystemTap script is to name
events, and to give them handlers.
When SystemTap runs the script, it monitors for certain events. When an
event occurs, the Linux kernel runs the handler as a sub-routine, then
resumes. Thus, events serve as the triggers for handlers to run.
Handlers can record specified data and print it in a certain manner.

The SystemTap language uses only a few data types (integers, strings, and
associative arrays of these) and full control structures (blocks,
conditionals, loops, functions). It has lightweight punctuation
(semicolons are optional) and does not need detailed declarations (types
are inferred and checked automatically).

For more information about SystemTap scripts and their syntax, refer to
Section 5.3, “Script Syntax” and to the
stapprobes and stapfuncs man
pages, which are available with the
systemtap-docs package.

Tapsets are a library of pre-written probes and functions that can be
used in SystemTap scripts. When a user runs a SystemTap script, SystemTap checks
the script's probe events and handlers against the tapset library.
SystemTap then loads the corresponding probes and functions before
translating the script to C. Like SystemTap scripts themselves, tapsets use
the filename extension *.stp.

However, unlike SystemTap scripts, tapsets are not meant for direct
execution—they constitute the library from which other scripts can
pull definitions. Thus, the tapset library is an abstraction layer
designed to make it easier for users to define events and functions.
Tapsets provide useful aliases for functions that users may want to
specify as an event (knowing the proper alias is usually easier than
remembering specific kernel functions that might vary between kernel
versions).

The main commands associated with SystemTap are stap and
staprun. To execute them, you either need root
privileges or must be a member of the
stapdev or
stapusr group.

stap

SystemTap front-end. Runs a SystemTap script (either from file, or from
standard input). It translates the script into C code, compiles it,
and loads the resulting kernel module into a running Linux kernel.
Then, the requested system trace or probe functions are performed.

staprun

SystemTap back-end. Loads and unloads kernel modules produced by the
SystemTap front-end.

For a list of options for each command, use --help. For
details, refer to the stap and the
staprun man pages.

To avoid giving root access to users just for running SystemTap, you
can make use of the following SystemTap groups. They are not available by
default on SUSE Linux Enterprise, but you can create the groups and modify the access
rights accordingly.

stapdev

Members of this group can run SystemTap scripts with
stap, or run SystemTap instrumentation modules with
staprun. As running stap
involves compiling scripts into kernel modules and loading them into
the kernel, members of this group still have effective root
access.

stapusr

Members of this group are only allowed to run SystemTap instrumentation
modules with staprun. In addition, they can only
run those modules from
/lib/modules/kernel_version/systemtap/.
This directory must be owned by root and must be writable
only by the root user.

As SystemTap needs information about the kernel, some kernel-related
packages must be installed in addition to the SystemTap packages. For each
kernel you want to probe with SystemTap, you need to install a set of the
following packages that exactly matches the kernel version and flavor
(indicated by * in the overview below).

Repository for Packages with Debugging Information

If you subscribed your system for online updates, you can find
“debuginfo” packages in the
*-Debuginfo-Updates online installation repository
relevant for SUSE Linux Enterprise Server 11 SP3. Use YaST to enable the
repository.

For the classic SystemTap setup, install the following packages (using
either YaST or zypper).

systemtap

systemtap-server

systemtap-docs (optional)

kernel-*-base

kernel-*-debuginfo

kernel-*-devel

kernel-source-*

gcc

To get access to the man pages and to a helpful collection of example
SystemTap scripts for various purposes, additionally install the
systemtap-docs package.

To check if all packages are correctly installed on the machine and if
SystemTap is ready to use, execute the following command as root.

stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'

It probes the currently used kernel by running a script and returning an
output. If the output is similar to the following, SystemTap is successfully
deployed and ready to use:

Checks the script against the existing tapset library in
/usr/share/systemtap/tapset/ for any tapsets used.
Tapsets are scripts that form a library of pre-written probes and
functions that can be used in SystemTap scripts.

Examines the script for its components.

Translates the script to C. Runs the system C compiler to create a
kernel module from it. Both the resulting C code
(*.c) and the kernel module
(*.ko) are stored in the SystemTap cache,
~/.systemtap.

Loads the module and enables all the probes (events and handlers) in
the script by hooking into the kernel. The event being probed is a
Virtual File System (VFS) read. As the event occurs on any processor, a
valid handler is executed (prints the text read
performed) and closed with no errors.

After the SystemTap session is terminated, the probes are disabled, and
the kernel module is unloaded.

In case any error messages appear during the test, check the output for
hints about any missing packages and make sure they are installed
correctly. Rebooting and loading the appropriate kernel may also be
needed.

A series of script language statements that specify the work to be done
whenever a certain event occurs. This normally includes extracting
data from the event context, storing it in internal variables, or
printing results.

An event and its corresponding handler is collectively called a
probe. SystemTap events are also called probe
points. A probe's handler is also referred to as probe
body.

Comments can be inserted anywhere in the SystemTap script in various styles:
using either #, /* */, or
// as marker.

String to be printed by the printf function,
followed by a line break (\n).

Second function defined in the handler: the exit()
function. Note that the SystemTap script will continue to run until the
exit() function executes. If you want to stop the
execution of the script earlier, stop it manually by pressing
Ctrl+C.

End of the handler definition, indicated by }.

The event begin
(the start of the SystemTap session) triggers the handler enclosed in
{ }, in this case the printf
function, which prints hello world followed by a new line,
and then exits.
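Assembled, the script that the preceding callouts describe reads:

```
probe begin
{
    printf("hello world\n")
    exit()
}
```

Saved to a file (for example hello-world.stp, a name chosen here for illustration), it is run with stap hello-world.stp.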

If your statement block holds several statements, SystemTap executes them
in sequence—you do not need to insert special
separators or terminators between multiple statements. A statement block
can also be nested within another statement block. Generally, statement
blocks in SystemTap scripts use the same syntax and semantics as in the C
programming language.

The general event syntax is a dotted-symbol sequence. This allows a
breakdown of the event namespace into parts. Each component identifier
may be parametrized by a string or number literal, with a syntax like a
function call. A component may include a * character,
to expand to other matching probe points. A probe point may be followed
by a ? character, to indicate that it is optional,
and that no error should result if it fails to expand.
Alternately, a probe point may be followed by a !
character to indicate that it is both optional and sufficient.

SystemTap supports multiple events per probe—they need to be
separated by a comma (,). If multiple events are
specified in a single probe, SystemTap will execute the handler when any of
the specified events occur.

In general, events can be classified into the following categories:

Synchronous events: Occur when any process executes an instruction at
a particular location in kernel code. This gives other events a
reference point (instruction address) from which more contextual data
may be available.

An example of a synchronous event is
vfs.file_operation: the
entry to the file_operation event for the
Virtual File System (VFS). For example, in
Section 5.2, “Installation and Setup”, read
is the file_operation event used for VFS.

Asynchronous events: Not tied to a particular instruction or location
in code. This family of probe points consists mainly of counters,
timers, and similar constructs.

Examples of asynchronous events are begin (the start
of a SystemTap session—as soon as a SystemTap script is run),
end (the end of a SystemTap session), and timer events.
Timer events specify a handler to be executed periodically, for
example
timer.s(seconds) or
timer.ms(milliseconds).

When used in conjunction with other probes that collect information,
timer events allow you to print out periodic updates and see how that
information changes over time.

For example, the following probe would print the text “hello
world” every 4 seconds:

probe timer.s(4)
{
printf("hello world\n")
}

For detailed information about supported events, refer to the
stapprobes man page. The See
Also section of the man page also contains links to other
man pages that discuss supported events for specific subsystems and
components.

If you need the same set of statements in multiple probes, you can
place them in a function for easy reuse. Functions are defined by the
keyword function followed by a name. They take any
number of string or numeric arguments (by value) and may return a
single string or number.

The statements in function_name are executed
when the probe for event executes. The
arguments are optional values passed into
the function.
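Schematically, definition and call look like this (a syntax sketch, with function_name, arguments, event and statements as placeholders, not a complete script):

```
function function_name(arguments) {
    statements
}

probe event {
    function_name(arguments)
}
```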

Functions can be defined anywhere in the script. They may take any
number of arguments and return at most one value.
One function that is needed very often was already introduced in
Example 5.1, “Simple SystemTap Script”: the
printf function for printing data in a formatted
way. When using the printf function, you can specify
how arguments should be printed by using a format string. The format
string is included in quotation marks and can contain format
specifiers, introduced by a % character.

Which format specifiers to use depends on your list of arguments. Format
strings can have multiple format specifiers, each matching a
corresponding argument. Multiple arguments are separated by commas.
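For example, the following handler line prints a string and a number in one go (execname() and pid() are standard SystemTap functions returning the current executable name and process ID; %s matches the string argument and %d the numeric one):

```
printf("%s is running with pid %d\n", execname(), pid())
```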

Useful function for organizing print results. It (internally) stores
an indentation counter for each thread (tid()).
The function takes one argument, an indentation delta, indicating
how many spaces to add or remove from the thread's indentation
counter. It returns a string with some generic trace data along with
an appropriate number of indentation spaces. The generic data
returned includes a time stamp (number of microseconds since the
initial indentation for the thread), a process name, and the thread
ID itself. This allows you to identify what functions were called,
who called them, and how long they took.

Call entries and exits often do not immediately precede each other
(otherwise it would be easy to match them). In between a first call
entry and its exit, usually a number of other call entries and exits
are made. The indentation counter helps you match an entry with its
corresponding exit as it indents the next function call in case it
is not the exit of the previous one. For an
example SystemTap script using thread_indent() and
the respective output, refer to the SystemTap
Tutorial:
http://sourceware.org/systemtap/tutorial/Tracing.html#fig:socket-trace.

For more information about supported SystemTap functions, refer to the
stapfuncs man page.

Apart from functions, you can use several other common constructs in
SystemTap handlers, including variables, conditional statements (like
if/else), while loops,
for loops, arrays, and command line arguments.

Variables may be defined anywhere in the script. To define one, simply
choose a name and assign a value from a function or expression to it:

foo = gettimeofday_s()

Then you can use the variable in an expression. From the type of
values assigned to the variable, SystemTap automatically infers the type
of each identifier (string or number). Any inconsistencies will be
reported as errors. In the example above, foo would
automatically be classified as a number and could be printed via
printf() with the integer format specifier
(%d).

However, by default, variables are local to the probe they are used
in: they are initialized, used and disposed of at each handler
invocation. To share variables between probes, declare them as global
anywhere in the script. To do so, use the global
keyword outside of the probes:

This example script computes the CONFIG_HZ setting of the kernel by
using timers that count jiffies and milliseconds, then computing
accordingly. (A jiffy is the duration of one tick of the system timer
interrupt. It is not an absolute time interval unit, since its
duration depends on the clock interrupt frequency of the particular
hardware platform). With the global statement it
is possible to use the variables count_jiffies and
count_ms also in the probe
timer.ms(12345). With ++ the
value of a variable is incremented by 1.
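The script sketched below follows that description: two global counters incremented by timer probes, and a third probe that computes the ratio and exits (the concrete probe intervals and the arithmetic are assumptions made here for the sketch; with these intervals, count_jiffies/count_ms equals CONFIG_HZ/1000, hence the factor 1000):

```
global count_jiffies, count_ms

probe timer.jiffies(100) { count_jiffies ++ }
probe timer.ms(100)      { count_ms ++ }

probe timer.ms(12345)
{
    hz = (1000 * count_jiffies) / count_ms
    printf("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n",
           count_jiffies, count_ms, hz)
    exit()
}
```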

There are a number of conditional statements that you can use in
SystemTap scripts. The following are probably the most common:

If/Else Statements

They are expressed in the following format:

if (condition) statement1
else statement2

The if statement compares an integer-valued
expression to zero. If the condition expression
is non-zero, statement1
is executed. If the condition expression is zero,
statement2
is executed. The else clause
(else statement2)
is optional. Both statement1
and statement2
can also be statement blocks.

While Loops

They are expressed in the following format:

while (condition) statement

As long as condition is non-zero, the statement
is executed. statement
can also be a statement block. It must change a value so that
condition will eventually be zero.

For Loops

They are basically a shortcut for while loops
and are expressed in the following format:

for (initialization; conditional; increment) statement

The expression specified in initialization
is used to initialize a counter for the number of loop iterations
and is executed before execution of the loop starts. The execution
of the loop continues until the loop condition conditional
is false. (This expression is checked at the beginning of each loop
iteration). The expression specified in increment
is used to increment the loop counter. It is executed at the end of
each loop iteration.

This SystemTap script monitors the incoming TCP connections and helps to
identify unauthorized or unwanted network access requests in real time.
It shows the following information for each new incoming TCP connection
accepted by the computer:
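A sketch of such a script, modeled on the tcp_connections.stp example shipped with SystemTap: it prints the UID, command name, PID, local port, and source IP address of each accepted connection (the probed kernel function and the tapset helper names vary with kernel and SystemTap versions):

```systemtap
probe begin {
  printf("%6s %16s %6s %6s %16s\n",
         "UID", "CMD", "PID", "PORT", "IP_SOURCE")
}

# inet_csk_accept() is called when the kernel accepts a TCP connection
probe kernel.function("inet_csk_accept").return {
  sock = $return
  if (sock != 0)
    printf("%6d %16s %6d %6d %16s\n", uid(), execname(), pid(),
           inet_get_local_port(sock), inet_get_ip_source(sock))
}
```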

For debugging user-space applications (like DTrace can do), SUSE Linux Enterprise Server 11 SP3 supports user-space probing with SystemTap: Custom probe points can
be inserted in any user-space application. Thus, SystemTap lets you use both Kernel- and user-space
probes to debug the behavior of the whole system.

To get the required utrace infrastructure and the uprobes Kernel module for user-space
probing, you need to install the kernel-trace package
in addition to the packages listed in Section 5.2, “Installation and Setup”.

Basically, utrace implements a framework for controlling user-space tasks. It provides an
interface that can be used by various tracing “engines”, implemented as loadable
Kernel modules. The engines register callback functions for specific events, then attach to
whichever thread they wish to trace. As the callbacks are made from “safe” places in
the Kernel, this allows for great leeway in the kinds of processing the functions can do. Various
events can be watched via utrace, for example, system call entry and exit, fork(), signals being
sent to the task, etc. More details about the utrace infrastructure are available at http://sourceware.org/systemtap/wiki/utrace.

SystemTap includes support for probing the entry into and return from a function in
user-space processes, probing predefined markers in user-space code, and monitoring user-process
events.

To check if the currently running Kernel provides the needed utrace support, use the
following command:
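For example (the configuration file path follows the standard /boot naming scheme and is an assumption):

```shell
# Look for utrace support in the configuration of the running kernel;
# CONFIG_UTRACE=y indicates that utrace is available.
config="/boot/config-$(uname -r)"
if [ -r "$config" ]; then
  grep CONFIG_UTRACE "$config" || echo "CONFIG_UTRACE is not set"
else
  echo "kernel configuration not found at $config"
fi
```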

Huge collection of useful information about SystemTap, ranging from
detailed user and developer documentation to reviews and comparisons
with other tools, or Frequently Asked Questions and tips. Also
contains collections of SystemTap scripts, examples and usage stories and
lists recent talks and papers about SystemTap.

Features a SystemTap Tutorial, a SystemTap
Beginner's Guide, a Tapset Developer's
Guide, and a SystemTap Language
Reference in PDF and HTML format. Also lists the relevant
man pages.

You can also find the SystemTap language reference and SystemTap tutorial in
your installed system under
/usr/share/doc/packages/systemtap. Example SystemTap
scripts are available from the example subdirectory.

Kernel probes are a set of tools to collect Linux kernel debugging and
performance information. Developers and system administrators usually use
them either to debug the kernel, or to find system performance
bottlenecks. The reported data can then be used to tune the system for
better performance.

You can insert these probes into any kernel routine, and specify a handler
to be invoked after a particular break-point is hit. The main advantage of
kernel probes is that you no longer need to rebuild the kernel and reboot
the system after you make changes in a probe.

To use kernel probes, you typically need to write or obtain a specific
kernel module. Such a module includes both the init and
the exit function. The init function (such as
register_kprobe()) registers one or more probes,
while the exit function unregisters them. The registration function
defines where the probe will be inserted and
which handler will be called after the probe is hit.
To register or unregister a group of probes at one time, you can use
relevant
register_<probe_type>probes()
or
unregister_<probe_type>probes()
functions.

Debugging and status messages are typically reported with the
printk kernel routine.
printk is a kernel-space equivalent of a
user-space printf routine. For more information
on printk, see
Logging
kernel messages. Normally, you can view these messages by
inspecting /var/log/messages or
/var/log/syslog. For more information on log files,
see Chapter 4, Analyzing and Managing System Log Files.

There are three types of kernel probes: kprobes,
jprobes, and kretprobes.
Kretprobes are sometimes referred to as return
probes. You can find instructive source code examples of all three
types of kernel probes in the
/usr/src/linux/samples/kprobes/ directory (package
kernel-source).

A kprobe can be attached to any instruction in the Linux kernel. When it
is registered, it inserts a break-point at the first bytes of the probed
instruction. When the processor hits this break-point, the processor
registers are saved, and the processing passes to kprobes. First, a
pre-handler is executed, then the probed
instruction is stepped, and, finally a post-handler
is executed. The control is then passed to the instruction following the
probe point.

A jprobe is implemented through the kprobe mechanism. It is inserted on a
function's entry point and allows direct access to the arguments of the
function which is being probed. Its handler routine must have the same
argument list and return value as the probed function. It also has to
end by calling the jprobe_return() function.

When jprobe is hit, the processor registers are saved, and the
instruction pointer is directed to the jprobe handler routine. The
control
then passes to the handler with the same register contents as the
function being probed. Finally, the handler calls the
jprobe_return() function, which switches
control back to the probed function.

In general, you can insert multiple probes on one function. Jprobe is,
however, limited to only one instance per function.

Return probes are also implemented through kprobes. When the
register_kretprobe() function is called, a
kprobe is attached to the entry of the probed function.
After hitting the probe, the Kernel probes mechanism saves the probed
function return address and calls a user-defined return handler. The
control is then passed back to the probed function.

Before you call register_kretprobe(), you need
to set a maxactive argument, which specifies
how many instances of the function can be probed at the same time. If
set too low, you will miss a certain number of probes.

Kprobe's programming interface consists of functions, which are used to
register and unregister all used kernel probes, and associated probe
handlers. For a more detailed description of these functions and their
arguments, see the information sources in
Section 6.5, “For More Information”.

register_kprobe()

Inserts a break-point on a specified address. When the break-point is
hit, the pre_handler and
post_handler are called.

register_jprobe()

Inserts a break-point in the specified address. The address has to be
the address of the first instruction of the probed function. When the
break-point is hit, the specified handler is run. The handler should
have the same argument list and return type as the probed function.

register_kretprobe()

Inserts a return probe for the specified function. When the probed
function returns, a specified handler is run. This function returns 0
on success, or a negative error number on failure.

unregister_kprobe(), unregister_jprobe(), unregister_kretprobe()

Removes the specified probe. You can use it any time after the probe
has been registered.

The first column lists the address in the kernel where the probe is
inserted. The second column prints the type of the probe:
k for kprobe, j for jprobe, and
r for return probe. The third column specifies the
symbol, offset and optional module name of the probe. The following
optional columns include the status information of the probe. If the
probe is inserted on a virtual address which is not valid anymore, it is
marked with [GONE]. If the probe is temporarily
disabled, it is marked with [DISABLED].

The /sys/kernel/debug/kprobes/enabled file
represents a switch with which you can globally and forcibly turn on or
off all the registered kernel probes. To turn them off, simply enter

echo "0" > /sys/kernel/debug/kprobes/enabled

on the command line as root. To turn them on again, enter

echo "1" > /sys/kernel/debug/kprobes/enabled

Note that this way you do not change the status of the probes. If a
probe is temporarily disabled, it will not be enabled automatically but
will remain in the [DISABLED] state after entering
the latter command.

Perfmon2 is a standardized, generic interface to access the performance
monitoring unit (PMU) of a processor. It is portable across all PMU
models and architectures, supports system-wide and per-thread monitoring,
counting and sampling.

Performance monitoring is “the action of collecting information
related to how an application or system performs”. The
information can be obtained from the code or the CPU/chipset.

Modern processors contain a performance monitoring unit (PMU). The
design and functionality of a PMU is CPU specific: for example, the
number of registers, counters and features supported will vary by CPU
implementation.

The Perfmon interface is designed to be generic, flexible and
extensible. It can monitor at the program (thread) or system levels. In
either mode, it is possible to count or sample your profile information.
This uniformity makes it easier to write portable tools.
Figure 7.1, “Architecture of perfmon2” gives an overview.

Each PMU model consists of a set of registers: the performance monitor
configuration (PMC) and the performance monitor data (PMD). Only PMCs
are writeable, but both can be read. These registers store configuration
information and data.

Perfmon2 supports two modes where you can run your profiling: sampling
or counting.

Sampling is usually expressed by an interval of
time (time-based) or an occurrence of a defined number of events
(event-based). Perfmon indirectly supports time-based sampling by using
an event-based sample with a constant correlation to time (for example,
unhalted_reference_cycles).

In contrast, counting is expressed in terms of a
number of occurrences of an event.

Both methods store their information in a sample.
This sample contains information such as where a thread was
executing or instruction pointers.

The following example demonstrates the counting of the
CPU_OP_CYCLES event and the sampling of this event,
generating a sample per 100000 occurrences of the event:
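A sketch of the two invocations. The event name matches the Itanium example discussed below; the sampling option name is taken from pfmon 3.x and should be verified with pfmon --help on your system:

```shell
# Run only if pfmon is installed
if command -v pfmon >/dev/null 2>&1; then
  # Counting: report the total number of CPU_OP_CYCLES events while "ls" runs
  pfmon -e CPU_OP_CYCLES_ALL ls
  # Sampling: record one sample per 100000 occurrences of the event
  pfmon -e CPU_OP_CYCLES_ALL --long-smpl-periods=100000 ls
else
  echo "pfmon is not installed"
fi
```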

Use the --system-wide option to enable monitoring all
processes that execute on a specific CPU or set of CPUs. You do not
have to be root to do so; by default, user level monitoring is turned on for
all events (option -u).

A system-wide session can run concurrently with
other system-wide sessions as long as they do not monitor the same set
of CPUs. However, you cannot run a system-wide session together with
any per-thread session.

The following examples are taken from an Itanium IA64 Montecito
processor. To execute a system-wide session, perform the following
procedure:

Perfmon can collect statistics which are exported through the debug
interface. The metrics consist mostly of aggregated counts and
durations.

Access the data by mounting the debug file system as root
under /sys/kernel/debug.
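For example (a sketch; mounting debugfs requires root privileges):

```shell
# Mount debugfs unless it is already mounted, then look for the perfmon data
if ! grep -q debugfs /proc/mounts; then
  mount -t debugfs none /sys/kernel/debug
fi
ls /sys/kernel/debug/perfmon/ 2>/dev/null || echo "perfmon debug data not available"
```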

The data is located under /sys/kernel/debug/perfmon/
and organized per CPU. Each CPU directory contains a set of metrics, accessible as
ASCII files. The following data is taken from
/usr/src/linux/Documentation/perfmon2-debugfs.txt:

OProfile is a profiler for dynamic program analysis. It investigates the
behavior of a running program and gathers information. This information
can be viewed and gives hints for further optimizations.

It is not necessary to recompile or use wrapper libraries in order to use
OProfile. Not even a Kernel patch is needed. Usually, when you profile an
application, a small overhead is expected, depending on work load and
sampling frequency.

OProfile consists of a Kernel driver and a daemon for collecting data. It
makes use of the hardware performance counters provided on Intel, AMD,
and other processors. OProfile is capable of profiling all code including
the Kernel, Kernel modules, Kernel interrupt handlers, system shared
libraries, and other applications.

Modern processors support profiling through the hardware by performance
counters. Depending on the processor, there can be many counters and each
of these can be programmed with an event to count. Each counter has a
value which determines how often a sample is taken. The lower the value,
the more frequently samples are taken.

During the post-processing step, all information is collected and
instruction addresses are mapped to a function name.

It is possible with OProfile to profile both Kernel and applications. When
profiling the Kernel, tell OProfile where to find the
vmlinuz* file. Use the --vmlinux
option and point it to vmlinuz* (usually in
/boot). If you need to profile Kernel modules,
OProfile does this by default. However, make sure you read
http://oprofile.sourceforge.net/doc/kernel-profiling.html.

When profiling applications, you usually do not need to profile the Kernel
as well, so it is better to use the --no-vmlinux option
to reduce the amount of information.

The GUI for OProfile can be started as root with
oprof_start, see
Figure 8.1, “GUI for OProfile”. Select your events and change the
counter, if necessary. Every green line is added to the list of checked
events. Hover the mouse over the line to see a help text in the status
line below. Use the Configuration tab to set the
buffer and CPU size, the verbose option and others. Click on
Start to execute OProfile.

Before generating a report, make sure OProfile has dumped your data to the
/var/lib/oprofile/samples directory using the
command opcontrol --dump. A report
can be generated with the commands opreport or
opannotate.

Calling opreport without any options gives a complete
summary. With an executable as an argument, retrieve profile data only
from this executable. If you analyze applications written in C++, use the
--demangle=smart option.

The opannotate command generates output with annotations
from the source code. Run it with the following options:

The --base-dir option takes a comma-separated list of
paths which are stripped from the debug source files. These paths are
searched before looking in --search-dirs. The
--search-dirs option also takes a comma-separated list of
directories to search for source files.

Tuning the system is not only about optimizing the kernel or getting the
most out of your application, it begins with setting up a lean and fast
system. The way you set up your partitions and file systems can influence
the server's speed. The number of active services and the way routine
tasks are scheduled also affect performance.

A carefully planned installation ensures that the system is basically set
up exactly as you need it for the given purpose. It also saves
considerable time when fine tuning the system. All changes suggested in
this section can be made in the Installation Settings
step during the installation. See Section “Installation Settings” (Chapter 6, Installation with YaST, ↑Deployment Guide)
for details.

Depending on the server's range of applications and the hardware layout,
the partitioning scheme can influence the machine's performance
(although to a lesser extent only). It is beyond the scope of this
manual to suggest different partition schemes for particular workloads;
however, the following rules will positively affect performance. Of
course, they do not apply when using an external storage system.

Make sure there always is some free space available on the disk, since
a nearly full disk delivers inferior performance.

Actually, the installation scope has no direct influence on the
machine's performance, but a carefully chosen scope of packages
nevertheless has advantages. It is recommended to install the
minimum of packages needed to run the server. A system with a minimal
set of packages is easier to maintain and has fewer potential
security issues. Furthermore, a tailor-made installation scope also
ensures that no unnecessary services are started by default.

SUSE Linux Enterprise Server lets you customize the installation scope on the
Installation Summary screen. By default, you can select or remove
pre-configured patterns for specific tasks, but it is also possible to
start the YaST Software Manager for a fine-grained package based
selection.

One or more of the following default patterns may not be needed in all
cases:

GNOME Desktop Environment

A server seldom needs a full-blown desktop environment. In case a
graphical environment is needed, a more economical solution such
as icewm or fvwm may also be sufficient.

X Window System

When solely administrating the server and its applications via the
command line, consider not installing this pattern. However, keep in
mind that it is needed to run GUI applications from a remote machine.
If your application is managed by a GUI or if you prefer the GUI
version of YaST, keep this pattern.

A running X Window system consumes many resources and is seldom needed
on a server. It is strongly recommended to start the system in runlevel
3 (Full multiuser with network, no X). You will still be able to start
graphical applications from remote or use the startx
command to start a local graphical desktop.

The default installation starts a number of services (the number varies
with the installation scope). Since each service consumes resources, it
is recommended to disable the ones not needed. Run YaST > System > System Services (Runlevel) > Expert Mode to start the services management module. When using the
graphical version of YaST you can click on the column headlines to sort
the service list. Use this to get an overview of which services are
currently running or which services are started in the server's default
runlevel. Click a service to see its description. Use the
Start/Stop/Refresh drop-down menu to disable the
service for the running session. To permanently disable it, use the
Set/Reset drop-down menu.

The following list shows services that are started by default after the
installation of SUSE Linux Enterprise Server. Check which of the components you need,
and disable the others:

alsasound

Loads the Advanced Linux Sound Architecture (ALSA) sound system.

auditd

A daemon for the audit system (see Part “The Linux Audit Framework” (↑Security Guide) for
details). Disable if you do not use Audit.

Hard disks are the slowest components in a computer system and therefore
often the cause for a bottleneck. Using the file system that best suits
your workload helps to improve performance. Using special mount options
or prioritizing a process' I/O priority are further means to speed up the
system.

SUSE Linux Enterprise Server ships with a number of different file systems, including
Btrfs, Ext3, Ext2, ReiserFS, and XFS. Each file system has its own
advantages and disadvantages. Please refer to
Chapter 1, Overview of File Systems in Linux (↑Storage Administration Guide) for detailed information.

NFS (Version 3) tuning is covered in detail in the NFS Howto at
http://nfs.sourceforge.net/nfs-howto/. The first
thing to experiment with when mounting NFS shares is increasing the
read and write block size to 32768 by using the mount
options rsize and wsize.
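For example, a corresponding /etc/fstab entry could look like this (server name, export path and mount point are placeholders):

```
nfs.example.com:/export  /mnt/nfs  nfs  rsize=32768,wsize=32768  0 0
```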

Whenever a file is read on a Linux file system, its access time (atime)
is updated. As a result, each read-only file access in fact causes a
write operation. On a journaling file system two write operations
are triggered, since the journal will be updated, too. It is recommended
to turn this feature off when you do not need to keep track of access
times. This is likely the case for file and Web servers as well as for
network storage.

To turn off access time updates, mount the file system with the
noatime option. To do so, either edit
/etc/fstab directly, or use the Fstab
Options dialog when editing or adding a partition with the
YaST Partitioner.
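For example, an /etc/fstab entry with the noatime option could look like this (device, mount point and file system are placeholders):

```
/dev/sdb1  /data  ext3  defaults,noatime  1 2
```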

The ionice command lets you prioritize disk access
for single processes. This enables you to give less I/O priority to non
time-critical background processes with heavy disk access, such as
backup jobs. On the other hand ionice lets you raise
I/O priority for a specific process to make sure this process has always
immediate access to the disk. You may set the following three scheduling
classes:

Idle

A process from the idle scheduling class is only granted disk access
when no other process has asked for disk I/O.

Best effort

The default scheduling class used for any process that has not asked
for a specific I/O priority. Priority within this class can be
adjusted to a level from 0 to 7
(with 0 being the highest priority). Programs
running at the same best-effort priority are served in a round-robin
fashion. Some kernel versions treat priority within the best-effort
class differently—for details, refer to the
ionice(1) man page.

Real-time

Processes in this class are always granted disk access first.
Fine-tune the priority level from 0 to
7 (with 0 being the highest
priority). Use with care, since it can starve other processes.

For more details and the exact command syntax refer to the
ionice(1) man page.
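For example (du stands in for a disk-heavy backup job; the real-time class additionally requires root privileges):

```shell
# Query the I/O scheduling class and priority of the current shell
ionice -p $$

# Run a disk-heavy job in the idle class, so it is only granted
# disk access when the disk is otherwise idle
ionice -c 3 du -s /etc 2>/dev/null || true

# Raise a running process (PID is a placeholder) to best-effort priority 0
# ionice -c 2 -n 0 -p 1234
```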

Kernel Control Groups (known as “cgroups”) are a
kernel feature that allows aggregating or partitioning tasks (processes)
and all their children into hierarchically organized groups. These
hierarchical groups can be configured to show a specialized behavior that
helps with tuning the system to make best use of available hardware and
network resources.

Web browsers such as Firefox will be part of the Web network class, while
the NFS daemons such as (k)nfsd will be part of the NFS network class. On
the other side, Firefox will share appropriate CPU and memory classes
depending on whether a professor or student started it.

The Freezer subsystem is useful for high-performance computing
clusters (HPC clusters). Use it to freeze (stop) all tasks in a group
or to stop tasks, if they reach a defined checkpoint. For more
information, see
/usr/src/linux/Documentation/cgroups/freezer-subsystem.txt.

For more information about caveats, usage scenarios, and additional
parameters, see
/usr/src/linux/Documentation/cgroups/blkio-controller.txt.

Network Traffic (Resource Control)

With cgroup_tc, a network traffic controller
is available. It can be used to manage traffic that is associated with
the tasks in a cgroup. Additionally, cls_flow
can classify packets based on the tc_classid field
in the packet.

For example, to limit the traffic from all tasks from a
file_server cgroup to 100 Mbps, proceed as follows:

The kernel shipped with SUSE Linux Enterprise Server supports cgroups. There is no need
to apply additional patches. Execute lxc-checkconfig
to see a cgroups environment similar to the following output:

Power management aims at reducing operating costs for energy and cooling
systems while at the same time keeping the performance of a system at a
level that matches the current requirements. Thus, power management is
always a matter of balancing the actual performance needs and power
saving options for a system. Power management can be implemented and used
at different levels of the system. A set of specifications for power
management functions of devices and the operating system interface to
them has been defined in the Advanced Configuration and Power Interface
(ACPI). As power savings in server environments can primarily be achieved
on processor level, this chapter introduces some of the main concepts and
highlights some tools for analyzing and influencing relevant parameters.

At CPU level, you can control power usage in various ways: for example,
by using idling power states (C-states), changing CPU frequency
(P-states), and throttling the CPU (T-states). The following sections
give a short introduction to each approach and its significance for power
savings. Detailed specifications can be found at
http://www.acpi.info/spec.htm.

Modern processors have several power saving modes called
C-states. They reflect the capability of an idle
processor to turn off unused components in order to save power. Whereas
C-states have been available for laptops for some time, they are a
rather recent trend in the server market (for example, with Intel*
processors, C-states have only been available since
Nehalem).

When a processor runs in the C0 state, it is
executing instructions. A processor running in any other C-state is idle.
The higher the C number, the deeper the CPU sleep mode: more components are
shut down to save power. Deeper sleep states are very efficient concerning
power consumption in an idle system. But the downside is that they introduce
higher latency (the time the CPU needs to go back to C0).
Depending on the workload (threads waking up, triggering some CPU utilization
and then going back to sleep again for a short period of time) or hardware (for
example, interrupt activity of a network device), disabling the deepest sleep states
can significantly increase overall performance. For details on how
to do so, refer to Section 11.3.2.2, “Viewing and Modifying Kernel Idle Statistics with cpupower”.

Some states also have submodes with different power saving latency
levels. Which C-states and submodes are supported depends on the
respective processor. However, C1 is always
available.

C2

Stops CPU main internal clocks via hardware. In this state, the
processor maintains all software-visible state, but may take
longer to wake up through interrupts.

C3

Stops all CPU internal clocks. The processor does not need to keep
its cache coherent, but maintains other states. Some processors
have variations of the C3 state that differ in how long it takes to
wake the processor through interrupts.

To avoid needless power consumption, it is recommended to test your
workloads with deep sleep states enabled versus deep sleep states disabled.
A recent maintenance update for SUSE Linux Enterprise Server 11 SP3 provides an updated cpupower package with an additional
cpupower subcommand. Use it to disable or enable
individual C-states, if necessary. For more information, refer to Section 11.3.2.2, “Viewing and Modifying Kernel Idle Statistics with cpupower” or the
cpupower-idle-set(1) man page.

While a processor operates (in C0 state), it can be in one of several
CPU performance states (P-states). Whereas C-states
are idle states (all but C0), P-states are
operational states that relate to CPU frequency and voltage.

The higher the P-state, the lower the frequency and voltage at which the
processor runs. The number of P-states is processor-specific and the
implementation differs across the various types. However,
P0 is always the highest-performance state. Higher
P-state numbers represent slower processor speeds and lower power
consumption. For example, a processor in P3 state runs more slowly and
uses less power than a processor running at P1 state. To operate at any
P-state, the processor must be in the C0 state where the processor is
working and not idling. The CPU P-states are also defined in the
Advanced Configuration and Power Interface (ACPI) specification, see
http://www.acpi.info/spec.htm.

T-states refer to throttling the processor clock to lower frequencies in
order to reduce thermal effects. This means that the CPU is forced to be
idle a fixed percentage of its cycles per second. Throttling states
range from T1 (the CPU has no forced idle cycles) to
Tn, with the percentage of
idle cycles increasing the greater n is.

Note that throttling does not reduce voltage, and since the CPU is forced
to idle part of the time, processes will take longer to finish and will
consume more power instead of saving any power.

T-states are only useful if reducing thermal effects is the primary
goal. Since T-states can interfere with C-states (preventing the CPU
from reaching higher C-states), they can even increase power consumption
in a modern CPU capable of C-states.

For quite some time now, CPU power consumption and performance tuning
has not only been about frequency scaling. In modern processors, a
combination of different means is used to achieve the optimum balance
between performance and power savings: deep sleep states, traditional
dynamic frequency scaling and hidden boost frequencies. The turbo
features (Turbo CORE* or Turbo Boost*) of the latest AMD* or Intel*
processors make it possible to dynamically increase (boost) the clock
speed of active CPU cores while other cores are in deep sleep states.
This increases the performance of active threads while still complying
with Thermal Design Power (TDP) limits.

However, the conditions under which a CPU core may use turbo frequencies
are very architecture-specific. Learn how to evaluate the efficiency of
those new features in
Section 11.3.2, “Using the cpupower Tools”.

Processor performance states (P-states) and processor operating states
(C-states) are the capability of a processor to switch between different
supported operating frequencies and voltages to modulate power
consumption.

In order to dynamically scale processor frequencies at runtime, you can
use the CPUfreq infrastructure to set a static or dynamic power policy
for the system. Its main components are the CPUfreq subsystem
(providing a common interface to the various low-level technologies and
high-level policies), the in-kernel governors (policy governors that can
change the CPU frequency based on different criteria) and CPU-specific
drivers that implement the technology for the specific processor.

The dynamic scaling of the clock speed helps to consume less power and
generate less heat when not operating at full capacity.

You can think of the in-kernel governors as pre-configured
power schemes for the CPU. The CPUfreq governors use P-states to
change frequencies and lower power consumption. The dynamic governors
can switch between CPU frequencies, based on CPU utilization to allow
for power savings while not sacrificing performance. These governors
also allow for some tuning so you can customize and change the frequency
scaling behavior.

The following governors are available with the CPUfreq subsystem:

Performance Governor

The CPU frequency is statically set to the highest possible for
maximum performance. Consequently, saving power is not the focus of
this governor.

Tuning options: The range of maximum frequencies available to the
governor can be adjusted (for example, with the
cpupower command line tool).

Powersave Governor

The CPU frequency is statically set to the lowest possible. This can
have severe impact on the performance, as the system will never rise
above this frequency no matter how busy the processors are.

However, using this governor often does not lead to the expected
power savings as the highest savings can usually be achieved at idle
through entering C-states. Due to running processes at the lowest
frequency with the powersave governor, processes will take longer to
finish, thus prolonging the time for the system to enter any idle
C-states.

Tuning options: The range of minimum frequencies available to the
governor can be adjusted (for example, with the
cpupower command line tool).

On-demand Governor

The kernel implementation of a dynamic CPU frequency policy: The
governor monitors the processor utilization. As soon as it exceeds a
certain threshold, the governor will set the frequency to the highest
available. If the utilization is less than the threshold, the next
lowest frequency is used. If the system continues to be
underemployed, the frequency is again reduced until the lowest
available frequency is set.

For SUSE Linux Enterprise, the on-demand governor is the default governor and the one
that has the best test coverage.

Tuning options: The range of available frequencies, the rate at which
the governor checks utilization, and the utilization threshold can be
adjusted. Another parameter you might want to change for the
on-demand governor is ignore_nice_load. For
details, refer to
Procedure 11.1, “Ignoring Nice Values in Processor Utilization”.

Conservative Governor

Similar to the on-demand implementation, this governor also
dynamically adjusts frequencies based on processor utilization,
except that it allows for a more gradual increase in power. If
processor utilization exceeds a certain threshold, the governor does
not immediately switch to the highest available frequency (as the
on-demand governor does), but only to the next higher available
frequency.

Tuning options: The range of available frequencies, the rate at which
the governor checks utilization, the utilization thresholds, and the
frequency step rate can be adjusted.

If the CPUfreq subsystem is enabled on your system (which it is by
default with SUSE Linux Enterprise Server), you can find the relevant files and directories
under /sys/devices/system/cpu/. If you list the
contents of this directory, you will find a
cpu{0..x} subdirectory for each processor, and
several other files and directories. A cpufreq
subdirectory in each processor directory holds a number of files and
directories that define the parameters for CPUfreq. Some of them are
writable (for root), some of them are read-only. If your system
currently uses the on-demand or conservative governor, you will see a
separate subdirectory for those governors in
cpufreq, containing the parameters for the
governors.
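The layout described above can be inspected with a few shell commands. The following is a minimal sketch, assuming the standard CPUfreq sysfs attribute names; which of them are present depends on the driver and the active governor:

```shell
# Inspect the CPUfreq interface of the first processor.
cpu0=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$cpu0" ]; then
    status=enabled
    cat "$cpu0/scaling_governor"             # currently active governor
    cat "$cpu0/scaling_available_governors"  # governors you could switch to
    # per-governor tunables appear in a subdirectory while that governor is active:
    ls -d "$cpu0/ondemand" "$cpu0/conservative" 2>/dev/null
else
    status=disabled
fi
echo "CPUfreq subsystem: $status"
```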

Different Processor Settings

The settings under the cpufreq directory can be
different for each processor. If you want to use the same policies
across all processors, you need to adjust the parameters for each
processor. Instead of looking up or modifying the current settings
manually (in /sys/devices/system/cpu/cpu*/cpufreq), we
advise using the tools provided by the
cpupower package
or by the older
cpufrequtils package
for that.

With the tools of the
cpufrequtils package
you can view and modify settings of the kernel-related CPUfreq
subsystem. The cpufreq* commands are useful for
modifying settings related to P-states, especially frequency scaling
and CPUfreq governors.

The new cpupower tool was designed to give an
overview of all CPU power-related parameters that
are supported on a given machine, including turbo (or boost) states.
Use the tool set to view and modify settings of the kernel-related
CPUfreq and cpuidle systems as well as other settings not related to
frequency scaling or idle states. The integrated monitoring framework
can access both Kernel-related parameters and hardware statistics and
is thus ideally suited for performance benchmarks. It also helps you
to identify the dependencies between turbo and idle states.

powerTOP combines various sources of information (analysis of
programs, device drivers, kernel options, amounts and sources of
interrupts waking up processors from sleep states) and shows them in
one screen. The tool helps you to identify the reasons for
unnecessarily high power consumption (for example, processes that are mainly
responsible for waking up a processor from its idle state) and to
optimize your system settings to avoid these.

All functions of cpufrequtils are also covered by
cpupower—a new set of tools that is more
powerful and provides additional features. As
cpupower will replace
cpufrequtils sooner or later, we advise switching to
cpupower soon and adjusting your scripts
accordingly.

After you have installed the
cpufrequtils package,
you can make use of the cpufreq-info and
cpufreq-set command line tools.

Using the appropriate options, you can view the current CPU frequency,
the minimum and maximum CPU frequency allowed, show the currently used
CPUfreq policy, the available CPUfreq governors, or determine the
CPUfreq kernel driver used. For more details and the available
options, refer to the cpufreq-info man page or run
cpufreq-info --help.
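For example, the following read-only queries could be used. This is a sketch with option letters as documented in the cpufreq-info man page, guarded in case the package is not installed:

```shell
# Query CPUfreq information with cpufreq-info (cpufrequtils package).
if command -v cpufreq-info >/dev/null 2>&1; then
    have=yes
    cpufreq-info -c 0 -p   # current policy (min/max frequency, governor) of CPU 0
    cpufreq-info -c 0 -g   # governors available for CPU 0
    cpufreq-info -d        # CPUfreq kernel driver in use
else
    have=no
    echo "cpufrequtils is not installed"
fi
echo "cpufreq-info available: $have"
```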

To modify CPUfreq settings, use the cpufreq-set
command as root. It allows you to set values for the minimum or
maximum CPU frequency the governor may select or to choose a new
governor. With the -c option, you can also specify for
which of the processors the settings should be modified. That makes it
easy to use a consistent policy across all processors without adjusting
the settings for each processor individually. For more details and the
available options, refer to the cpufreq-set man page
or run cpufreq-set --help.
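As a sketch (run as root; the option letters are those from the cpufreq-set man page, and the frequency values are examples only):

```shell
# Modify CPUfreq settings with cpufreq-set (cpufrequtils package).
applied=no
if command -v cpufreq-set >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    cpufreq-set -c 0 -g ondemand && applied=yes   # on-demand governor on CPU 0
    cpufreq-set -c 0 -d 800MHz -u 2GHz            # limit CPU 0 frequency range
fi
echo "settings applied: $applied"
```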

After installing the
cpupower package, view
the available cpupower subcommands with
cpupower --help. Access the general man page
with man cpupower, and the man pages of the
subcommands with
man cpupower-subcommand.

The subcommands frequency-info and
frequency-set are mostly equivalent to
cpufreq-info and cpufreq-set,
respectively. However, they provide extended output and there are small
differences in syntax and behavior:

Syntax Differences Between cpufreq* and cpupower

To specify the number of the CPU to which the command is applied, both
commands have the -c option. Due to the
command-subcommand structure, the placement of the -c
option is different for cpupower:

cpupower -c 4 frequency-info (versus
cpufreq-info -c 4)

cpupower also lets you specify a list of CPUs with
-c. For example, the following command would affect
the CPUs 1, 2,
3, and 5:

cpupower -c 1-3,5 frequency-set

If cpufreq* and cpupower are
used without the -c option, the behavior differs:

Similar to cpufreq-info,
cpupower frequency-info also shows the
statistics of the cpufreq driver used in the Kernel. Additionally, it
shows if turbo (boost) states are supported and enabled in the BIOS.
Run without any options, it prints this information for the selected CPUs.

After finding out which processor idle states are supported with
cpupower idle-info, individual states can be disabled
using the cpupower idle-set command. Typically one wants
to disable the deepest sleep state, for example:

cpupower idle-set -d 4

But before making this change permanent by adding the corresponding
command to a current /etc/init.d/* service file, check for
performance or power impact.

The most powerful enhancement is the monitor
subcommand. Use it to report processor topology, and monitor frequency
and idle power state statistics over a certain period of time. The
default interval is 1 second, but it can be changed
with the -i option. Independent processor sleep states and
frequency counters are implemented in the tool—some retrieved
from kernel statistics, others reading out hardware registers. The
available monitors depend on the underlying hardware and the system.
List them with cpupower monitor -l. For a
description of the individual monitors, refer to the cpupower-monitor
man page.
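For example (a sketch; the available monitors and sensible intervals depend on your hardware, and the call is guarded in case the package is not installed):

```shell
# Explore the cpupower monitor subcommand.
if command -v cpupower >/dev/null 2>&1; then
    have=yes
    cpupower monitor -l            # list the monitors supported on this machine
    cpupower -c all monitor -i 5   # sample all CPUs over a 5 second interval
else
    have=no
    echo "cpupower is not installed"
fi
echo "cpupower available: $have"
```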

The monitor subcommand allows you to execute
performance benchmarks and to compare Kernel statistics with hardware
statistics for specific workloads.

Mperf shows the average frequency of a CPU, including boost
frequencies, over a period of time. Additionally, it shows the
percentage of time the CPU has been active (C0)
or in any sleep state (Cx). The default sampling
rate is 1 second and the values are read directly
from the hardware registers. As the turbo states are managed by the
BIOS, it is impossible to get the frequency values at a given
instant. On modern processors with turbo features the Mperf monitor
is the only way to find out about the frequency a certain CPU has
been running in.

Idle_Stats shows the statistics of the cpuidle kernel subsystem. The
kernel updates these values every time an idle state is entered or
left. Therefore there can be some inaccuracy when cores are in an
idle state for some time when the measurement starts or ends.

Apart from the (general) monitors in the example above, other
architecture-specific monitors are available. For detailed
information, refer to the cpupower-monitor man
page.

By comparing the values of the individual monitors, you can find
correlations and dependencies and evaluate how well the power saving
mechanism works for a certain workload. In
Example 11.4 you can
see that CPU 0 is idle (the value of
Cx is near to 100%), but runs at a very high
frequency. Additionally, the CPUs 0 and
1 have the same frequency values which means that
there is a dependency between them.

Similar to cpufreq-set, you can use
the cpupower frequency-set command as root
to modify current settings. It allows you to set values for the minimum
or maximum CPU frequency the governor may select or to choose a new
governor. With the -c option, you can also specify for
which of the processors the settings should be modified. That makes it
easy to use a consistent policy across all processors without adjusting
the settings for each processor individually. For more details and the
available options, refer to the
cpupower-frequency-set man page or run
cpupower frequency-set --help.
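A sketch of typical calls (run as root; the frequency values are examples only and must lie within the hardware limits of your processor):

```shell
# Modify frequency settings with cpupower frequency-set.
applied=no
if command -v cpupower >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    cpupower -c all frequency-set -g ondemand && applied=yes  # governor on all CPUs
    cpupower -c 0 frequency-set -d 800MHz -u 2GHz             # range for CPU 0
fi
echo "settings applied: $applied"
```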

Another useful tool for monitoring system power consumption is
powerTOP. It helps you to identify the reasons for unnecessarily high
power consumption (for example, processes that are mainly responsible
for waking up a processor from its idle state) and to optimize your
system settings to avoid these. It supports both Intel and AMD
processors. The powertop
package is available from the SUSE Linux Enterprise SDK. For information on how to
access the SDK, refer to About This Guide.

powerTOP combines various sources of information (analysis of
programs, device drivers, kernel options, amounts and sources of
interrupts waking up processors from sleep states) and shows them in one
screen. Example 11.5, “Example powerTOP Output” shows which
information categories are available:

The column shows the C-states. When working, the CPU is in state
0, when resting it is in some state greater than
0, depending on which C-states are available and
how deep the CPU is sleeping.

The column shows average time in milliseconds spent in the particular
C-state.

The column shows the percentages of time spent in various C-states.
For considerable power savings during idle, the CPU should be in
deeper C-states most of the time. In addition, the longer the average
time spent in these C-states, the more power is saved.

The column shows the frequencies the processor and kernel driver
support on your system.

The column shows the amount of time the CPU cores stayed in different
frequencies during the measuring period.

Shows how often the CPU is awoken per second (number of interrupts).
The lower the number, the better. The interval
value is the powerTOP refresh interval which can be controlled with
the -t option. The default time to gather data is 5
seconds.

When running powerTOP on a laptop, this line displays the ACPI
information on how much power is currently being used and the
estimated time until discharge of the battery. On servers, this
information is not available.

Shows what is causing the system to be more active than needed.
powerTOP displays the top items causing your CPU to awake during
the sampling period.

The CPUfreq subsystem offers several tuning options for P-states: You
can switch between the different governors, influence minimum or maximum
CPU frequency to be used or change individual governor parameters.

To switch to another governor at runtime, use
cpupower frequency-set (or cpufreq-set) with
the -g option. For example, running the following
command (as root) will activate the on-demand governor:
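A sketch of the call, guarded so that it is a no-op where cpupower is missing or you are not root (with cpufrequtils, the equivalent is cpufreq-set -g ondemand):

```shell
# Activate the on-demand governor with cpupower.
switched=no
if command -v cpupower >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    cpupower frequency-set -g ondemand && switched=yes
fi
echo "governor switched: $switched"
```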

One parameter you might want to change for the on-demand or
conservative governor is ignore_nice_load.

Each process has a niceness value associated with it. This value is
used by the kernel to determine which processes require more processor
time than others. The higher the nice value, the lower the priority of
the process. Or: the “nicer” a process, the less CPU it
will try to take from other processes.
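For example, a low-priority job can be started with the nice command. A minimal illustration, relying on the fact that coreutils nice prints the current niceness when called without a command:

```shell
# Run a child shell with the lowest priority (niceness 19) and show
# the niceness it inherited.
# typically prints: child runs at niceness 19
nice -n 19 sh -c 'echo "child runs at niceness $(nice)"'
```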

If the ignore_nice_load parameter for the on-demand
or conservative governor is set to 1, any processes
with a nice value will not be counted toward the
overall processor utilization. When ignore_nice_load
is set to 0 (default value), all processes are
counted toward the utilization. Adjusting this parameter can be useful
if you are running something that requires a lot of processor capacity
but you do not care about the runtime.

Change to the subdirectory of the governor whose settings you want to
modify, for example:

cd /sys/devices/system/cpu/cpu0/cpufreq/conservative/

Show the current value of ignore_nice_load with:

cat ignore_nice_load

To set the value to 1, execute:

echo 1 > ignore_nice_load

Using the Same Value for All Cores

When setting the ignore_nice_load value for
cpu0, the same value is automatically used for all
cores. In this case, you do not need to repeat the steps above for each
of the processors where you want to modify this governor parameter.

Another parameter that significantly impacts the performance loss caused
by dynamic frequency scaling is the sampling rate (rate at which the
governor checks the current CPU load and adjusts the processor's
frequency accordingly). Its default value depends on a BIOS value and it
should be as low as possible. However, in modern systems, an appropriate
sampling rate is set by default and does not need manual intervention.

By default, SUSE Linux Enterprise Server uses C-states appropriately. The only
parameter you might want to touch for optimization is the
sched_mc_power_savings scheduler parameter. Instead of
distributing a work load across all cores with the effect that all cores
are used only at a minimum level, the kernel can try to schedule
processes on as few cores as possible so that the others can go idle.
This helps to save power as it allows some processors to be idle for a
longer time so they can reach a higher C-state. However, the actual
savings depend on a number of factors, for example how many processors
are available and which C-states are supported by them (especially
deeper ones such as C3 to C6).

If sched_mc_power_savings is set to
0 (default value), no special scheduling is done. If
it is set to 1, the scheduler tries to consolidate
the work onto as few processors as possible in the case that
all processors are a little busy.
To modify this parameter, proceed as follows:
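The parameter is exposed in sysfs. A sketch, assuming the file is present (it only exists on kernels with multi-core scheduling support):

```shell
# Read and, as root, change the multi-core power saving policy.
f=/sys/devices/system/cpu/sched_mc_power_savings
if [ -f "$f" ]; then
    cat "$f"                              # current value: 0 (default) or 1
    [ "$(id -u)" = 0 ] && echo 1 > "$f"   # try to consolidate load on fewer cores
else
    echo "sched_mc_power_savings is not available on this kernel"
fi
echo "checked"
```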

SUSE Linux Enterprise Server includes pm-profiler, intended for server use. It is a
script infrastructure to enable or disable certain power management
functions via configuration files. It allows you to define different
profiles, each having a specific configuration file for defining
different settings. A configuration template for new profiles can be
found at
/usr/share/doc/packages/pm-profiler/config.template.
The template contains a number of parameters you can use for your
profile, including comments on usage and links to further documentation.
The individual profiles are stored in
/etc/pm-profiler/. The profile that will be
activated on system start is defined in
/etc/pm-profiler.conf.

Edit the settings in
/etc/pm-profiler/testprofile/config and save the
file. You can also remove variables that you do not need—they
will be handled like empty variables and the respective settings will
not be touched at all.

Edit /etc/pm-profiler.conf. The
PM_PROFILER_PROFILE variable defines which
profile will be activated on system start. If it has no value, the
default system or kernel settings will be used. To set the newly
created profile:

PM_PROFILER_PROFILE="testprofile"

The profile name you enter here must match the name you used in the
path to the profile configuration file
(/etc/pm-profiler/testprofile/config), not
necessarily the NAME variable you set inside that
configuration file.

To activate the profile, run

rcpm-profiler start

or

/usr/lib/pm-profiler/enable-profile testprofile

Though you have to manually create or modify a profile by editing the
respective profile configuration file, you can use YaST to switch
between different profiles. Start YaST and select System+Power Management to open the Power Management Settings.
Alternatively, become root and execute yast2
power-management on a command line. The drop-down list shows
the available profiles. Default means that the system
default settings will be kept. Select the profile to use and click
Finish.

In order to make use of C-states or P-states, check your BIOS options:

To use C-states, make sure to enable CPU C State
or similar options to benefit from power savings at idle.

To use P-states and the CPUfreq governors, make sure to enable
Processor Performance States options or similar.

In case of a CPU upgrade, make sure to upgrade your BIOS, too. The
BIOS needs to know the new CPU and its valid frequency steps in
order to pass this information on to the operating system.

CPUfreq subsystem enabled?

In SUSE Linux Enterprise Server, the CPUfreq subsystem is enabled by default. To
find out if the subsystem is currently enabled, check for the
following path in your system:
/sys/devices/system/cpu/cpufreq (or
/sys/devices/system/cpu/cpu*/cpufreq for machines
with multiple cores). If the cpufreq subdirectory
exists, the subsystem is enabled.

If you suspect problems with the CPUfreq subsystem on your machine,
you can also enable additional debug output. To do so, either use
cpufreq.debug=7 as boot parameter or execute the
following command as root:

echo 7 > /sys/module/cpufreq/parameters/debug

This will cause CPUfreq to log more information to
dmesg on state transitions, which is useful for
diagnosis. But as this additional output of kernel messages can be
rather comprehensive, use it only if you are fairly sure that a
problem exists.

Platforms with a Baseboard Management Controller (BMC) may have
additional power management configuration options accessible via the
service processor. These configurations are vendor-specific and
therefore not the subject of this guide. For more information, refer to the
manuals provided by your vendor. For example, HP ProLiant
Server Power Management on SUSE Linux Enterprise Server
11—Integration Note provides detailed information on
how the HP platform-specific power management features interact with
the Linux Kernel. The paper is available from
http://h18004.www1.hp.com/products/servers/technology/whitepapers/os-techwp.html.

SUSE Linux Enterprise Server supports the parallel installation of multiple kernel
versions. When installing a second kernel, a boot entry and an initrd are
automatically created, so no further manual configuration is needed. When
rebooting the machine, the newly added kernel is available as an
additional boot option.

Using this functionality, you can safely test kernel updates while being
able to always fall back to the proven former kernel. To do so, do not
use the update tools (such as the YaST Online Update or the updater
applet), but instead follow the process described in this chapter.

Support Entitlement

Please be aware that you lose your entire support entitlement for the
machine when installing a self-compiled or a third-party kernel. Only
kernels shipped with SUSE Linux Enterprise Server and kernels delivered via the official
update channels for SUSE Linux Enterprise Server are supported.

Check Your Boot Loader Configuration

It is recommended to check your boot loader configuration after having
installed another kernel in order to set the default boot entry of your choice. See
Section “Configuring the Boot Loader with YaST” (Chapter 10, The Boot Loader GRUB, ↑Administration Guide) for more information. To change
the default append line for new kernel installations, adjust
/etc/sysconfig/bootloader prior to installing a new
kernel. For more information refer to
Section “The File /etc/sysconfig/bootloader” (Chapter 10, The Boot Loader GRUB, ↑Administration Guide).

When frequently testing new kernels with multiversion support enabled,
the boot menu quickly becomes confusing. Since
/boot usually has limited space, you might also
run into trouble with /boot overflowing. While you
may delete unused kernel versions manually with YaST or Zypper (as
described below), you can also configure
libzypp to automatically delete
kernels no longer used. By default no kernels are deleted.

Open /etc/zypp/zypp.conf with the editor of your
choice as root.

Search for the string multiversion.kernels and
activate this option by uncommenting the line. This option takes a
comma-separated list of the following values:

2.6.32.12-0.7:
keep the kernel with the specified version number

latest:
keep the kernel with the highest version number

latest-N:
keep the kernel with the Nth highest version number

running:
keep the running kernel

oldest:
keep the kernel with the lowest version number (the one that was
originally shipped with SUSE Linux Enterprise Server)

oldest+N:
keep the kernel with the Nth lowest version number

Here are some examples:

multiversion.kernels = latest,running

Keep the latest kernel and the currently running one. This is
similar to not enabling the multiversion feature at all, except
that the old kernel is removed after the next
reboot and not immediately after the installation.

multiversion.kernels = latest,latest-1,running

Keep the last two kernels and the one currently running.

multiversion.kernels = latest,running,3.0.rc7-test

Keep the latest kernel, the one currently running and
3.0.rc7-test.

Keep the running Kernel

Unless using special setups, you probably always want to keep the
running kernel.

I/O scheduling controls how input/output operations will be submitted to
storage. SUSE Linux Enterprise Server offers various I/O algorithms—called
elevators—suiting different workloads. Elevators
can help to reduce seek operations, prioritize I/O requests, or make
sure an I/O request is carried out before a given deadline.

Choosing the best suited I/O elevator not only depends on the workload,
but on the hardware, too. Single ATA disk systems, SSDs, RAID arrays, or
network storage systems, for example, each require different tuning
strategies.

SUSE Linux Enterprise Server lets you set a default I/O scheduler at boot-time, which
can be changed on the fly per block device. This makes it possible to set
different algorithms for the device hosting the system partition and for
the device hosting a database, for example.
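For example, to inspect and change the scheduler of a single device on the fly (sda is a placeholder for your block device; the change needs root and lasts until reboot):

```shell
# Show and switch the I/O scheduler of one block device at runtime.
dev=sda   # placeholder: adjust to your device
q=/sys/block/$dev/queue/scheduler
if [ -f "$q" ]; then
    cat "$q"                                    # active scheduler shown in brackets
    [ "$(id -u)" = 0 ] && echo deadline > "$q"  # switch this device to deadline
else
    echo "no such device: $dev"
fi
echo "checked $dev"
```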

By default the CFQ (Completely
Fair Queuing) scheduler is used. Change this default by specifying the
elevator boot parameter on the kernel command line.
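For example, to boot with the DEADLINE scheduler, add the following to the kernel command line in the boot loader configuration (elevator is the kernel parameter that selects the default I/O scheduler):

```
elevator=deadline
```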

CFQ is a fairness-oriented
scheduler and is used by default on SUSE Linux Enterprise Server. The algorithm assigns
each thread a time slice in which it is allowed to submit I/O to disk.
This way each thread gets a fair share of I/O throughput. It also allows
assigning tasks I/O priorities which are taken into account during
scheduling decisions (see man 1 ionice). The
CFQ scheduler has the
following tunable parameters:

/sys/block/<device>/queue/iosched/slice_idle

When a task has no more I/O to submit in its time slice, the I/O
scheduler waits for a while before scheduling the next thread to
improve locality of I/O. For media where locality does not play a big
role (SSDs, SANs with lots of disks) setting
/sys/block/<device>/queue/iosched/slice_idle
to 0 can improve the throughput considerably.
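For example (sda is a placeholder; the iosched directory only exists while CFQ is the active scheduler for the device):

```shell
# Disable CFQ idling on a device where I/O locality does not matter.
dev=sda   # placeholder: adjust to your device
p=/sys/block/$dev/queue/iosched/slice_idle
if [ -w "$p" ]; then
    echo 0 > "$p"
    echo "slice_idle disabled for $dev"
else
    echo "$p is not writable (wrong device, scheduler, or missing privileges)"
fi
```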

/sys/block/<device>/queue/iosched/quantum

This option limits the maximum number of requests that are being
processed by the device at once. The default value is
4. For storage with several disks, this setting
can unnecessarily limit parallel processing of requests. Therefore,
increasing the value can improve performance, although the latency of
some I/O may increase due to more requests being buffered inside the
storage. When changing this value, you can
also consider tuning
/sys/block/<device>/queue/iosched/slice_async_rq
(the default value is 2) which limits the maximum
number of asynchronous requests—usually writing
requests—that are submitted in one time slice.

/sys/block/<device>/queue/iosched/low_latency

For workloads where the latency of I/O is crucial, setting
/sys/block/<device>/queue/iosched/low_latency
to 1 can help.

A trivial scheduler that just passes down the I/O that comes to it.
Useful for checking whether the complex I/O scheduling decisions of other
schedulers are causing I/O performance regressions.

In some cases it can be helpful for devices that do I/O scheduling
themselves, such as intelligent storage, or for devices that do not depend on
mechanical movement, such as SSDs. Usually, the
DEADLINE I/O scheduler is
a better choice for these devices, but due to less overhead
NOOP may produce better
performance on certain workloads.

DEADLINE is a latency-oriented
I/O scheduler. Each I/O request has a deadline assigned. Usually,
requests are stored in queues (read and write) sorted by sector numbers.
The DEADLINE algorithm
maintains two additional queues (read and write) where the requests are
sorted by deadline. As long as no request has timed out, the
“sector” queue is used. If timeouts occur, requests from
the “deadline” queue are served until there are no more
expired requests. Generally, the algorithm prefers reads over writes.

This scheduler can provide a superior throughput over the
CFQ I/O scheduler in
cases where several threads read and write and fairness is not an issue.
For example, for several parallel readers from a SAN and for databases
(especially when using “TCQ” disks). The
DEADLINE scheduler has
the following tunable parameters:

/sys/block/<device>/queue/iosched/writes_starved

Controls how many reads can be sent to disk before it is possible to
send writes. A value of 3 means that three read
operations are carried out for one write operation.

/sys/block/<device>/queue/iosched/read_expire

Sets the deadline (current time plus the read_expire value) for read
operations in milliseconds. The default is 500.

/sys/block/<device>/queue/iosched/write_expire

Sets the deadline (current time plus the write_expire value) for write
operations in milliseconds. The default is 5000.
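A sketch of tuning these parameters (as root; sda is a placeholder, the values are examples only, and the iosched directory only exists while DEADLINE is the active scheduler for the device):

```shell
# Tune the DEADLINE scheduler of one block device.
dev=sda   # placeholder: adjust to your device
d=/sys/block/$dev/queue/iosched
if [ -w "$d/read_expire" ]; then
    echo 2   > "$d/writes_starved"   # serve writes after two batches of reads
    echo 250 > "$d/read_expire"      # tighten the read deadline to 250 ms
else
    echo "DEADLINE tunables not writable for $dev"
fi
echo "checked"
```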

Most file systems (XFS, ext3, ext4, reiserfs) send write barriers to disk
after fsync or during transaction commits. Write barriers enforce proper
ordering of writes, making volatile disk write caches safe to use (at
some performance penalty). If your disks are battery-backed in one way or
another, disabling barriers may safely improve performance.

Sending write barriers can be disabled using the
barrier=0 mount option (for ext3, ext4, and reiserfs),
or using the nobarrier mount option (for XFS).
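For example, an ext3 file system can be remounted without barriers at runtime. The mount point below is a placeholder; only do this when the write cache is battery-backed:

```shell
# Remount an ext3 file system without write barriers (as root).
# For XFS the corresponding mount option is "nobarrier".
mnt=/data   # placeholder: adjust to your mount point
if mountpoint -q "$mnt" 2>/dev/null && [ "$(id -u)" = 0 ]; then
    mount -o remount,barrier=0 "$mnt"
else
    echo "$mnt is not mounted; nothing changed"
fi
```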

Disabling barriers when disks cannot guarantee caches are properly
written in case of power failure can lead to severe file system
corruption and data loss.

Modern operating systems, such as SUSE® Linux Enterprise Server, normally run many
different tasks at the same time. For example, you can be searching in a
text file while receiving an e-mail and copying a big file to an external
hard drive. These simple tasks require many additional processes to be run
by the system. To provide each task with its required system resources,
the Linux kernel needs a tool to distribute available system resources to
individual tasks. And this is exactly what the task
scheduler does.

The following sections explain the most important terms related to
process scheduling. They also introduce information about the task
scheduler policy, scheduling algorithm, description of the task scheduler
used by SUSE Linux Enterprise Server, and references to other sources of relevant
information.

The Linux kernel controls the way tasks (or processes) are managed in the
running system. The task scheduler, sometimes called process
scheduler, is the part of the kernel that decides which task
to run next. It is one of the core components of a multitasking operating
system (such as Linux), being responsible for best utilizing system
resources to guarantee that multiple tasks are being executed
simultaneously.

The theory behind task scheduling is very simple. If there are runnable
processes in a system, at least one process must always be running. If
there are more runnable processes than processors in a system, not all
the processes can be running all the time.

Therefore, some processes need to be stopped temporarily, or
suspended, so that others can be running again. The
scheduler decides what process in the queue will run next.

As already mentioned, Linux, like all other Unix variants, is a
multitasking operating system. That means that
several tasks can be running at the same time. Linux provides
so-called preemptive multitasking, where the scheduler
decides when a process is suspended. This forced suspension is called
preemption. All Unix flavors have been providing
preemptive multitasking since the beginning.

The time period for which a process will be running before it is
preempted is defined in advance. It is called a
process' timeslice and represents the amount of
processor time that is provided to each process. By assigning
timeslices, the scheduler makes global decisions for the running system,
and prevents individual processes from dominating over the processor
resources.

The scheduler evaluates processes based on their priority. To calculate
the current priority of a process, the task scheduler uses complex
algorithms. As a result, each process is given a value according to
which it is “allowed” to run on a processor.

Processes are usually classified according to their purpose and behavior.
Although the borderline is not always clearly distinct, generally two
criteria are used to sort them. These criteria are independent and do not
exclude each other.

One approach is to classify a process as either
I/O-bound or processor-bound.

I/O-bound

I/O stands for Input/Output devices, such as keyboards, mice, or
optical and hard disks. I/O-bound processes spend
the majority of their time submitting and waiting for requests. They are run
very frequently, but for short time intervals, so as not to block other
processes waiting for I/O requests.

processor-bound

On the other hand, processor-bound tasks use
their time to execute code, and usually run until they are preempted
by the scheduler. They do not block processes waiting for I/O
requests, and, therefore, can be run less frequently but for longer
time intervals.

Another approach is to divide processes into
interactive, batch, and
real-time ones.

Interactive processes spend a lot of time waiting
for I/O requests, such as keyboard or mouse operations. The scheduler
must wake up such a process quickly on user request, or the user will
find the environment unresponsive. The typical delay is approximately
100 ms. Office applications, text editors or image manipulation
programs represent typical interactive processes.

Batch processes often run in the background and do
not need to be responsive. They usually receive lower priority from the
scheduler. Multimedia converters, database search engines, or log file
analyzers are typical examples of batch processes.

Real-time processes must never be blocked by
low-priority processes, and the scheduler guarantees a short response
time to them. Applications for editing multimedia content are a good
example here.

The Linux kernel version 2.6 introduced a new task scheduler, called the
O(1) scheduler (see
Big O
notation). It was used as the default scheduler up to Kernel
version 2.6.22. Its main task is to schedule tasks within a fixed amount
of time, no matter how many runnable processes there are in the system.

The scheduler calculates the timeslices dynamically. However, to
determine the appropriate timeslice is a complex task: Too long
timeslices cause the system to be less interactive and responsive, while
too short ones make the processor waste a lot of time on the overhead of
switching the processes too frequently. The default timeslice is usually
rather low, for example 20ms. The scheduler determines the timeslice
based on priority of a process, which allows the processes with higher
priority to run more often and for a longer time.

A process does not have to use all its timeslice at once. For
instance, a process with a timeslice of 150ms does not have to run
for 150ms in one go. It can instead run in five different schedule slots
of 30ms each. Interactive tasks typically benefit from this approach
because they do not need such a large timeslice at once, while they do
need to stay responsive.

The scheduler also assigns process priorities dynamically. It monitors
the processes' behavior and, if needed, adjusts their priorities. For
example, a process that has been suspended for a long time is boosted
by increasing its priority.

Since the Linux kernel version 2.6.23, a new approach has been taken to
the scheduling of runnable processes. Completely Fair Scheduler (CFS)
became the default Linux kernel scheduler. Since then, important changes
and improvements have been made. The information in this chapter applies
to SUSE Linux Enterprise Server with kernel version 2.6.32 and higher (including 3.x
kernels). The scheduler environment was divided into several parts, and
three main new features were introduced:

Modular Scheduler Core

The core of the scheduler was enhanced with scheduling
classes. These classes are modular and represent scheduling
policies.

Completely Fair Scheduler

Introduced in kernel 2.6.23 and extended in 2.6.24, CFS tries to
assure that each process obtains its “fair” share of the
processor time.

Group Scheduling

For example, if you split processes into groups according to which
user is running them, CFS tries to provide each of these groups with
the same amount of processor time.

As a result, CFS brings more optimized scheduling for both servers and
desktops.

CFS tries to guarantee a fair approach to each runnable task. To find
the most balanced way of task scheduling, it uses the concept of a
red-black tree. A red-black tree is a type of
self-balancing binary search tree in which entries can be inserted and
removed efficiently while the tree remains well balanced. For more
information, see the Wikipedia article on
Red-black
trees.

When a task enters into the run queue (a planned
time line of processes to be executed next), the scheduler records the
current time. While the process waits for processor time, its
“wait” value gets incremented by an amount derived from the
total number of tasks currently in the run queue and the process
priority. As soon as the processor runs the task, its
“wait” value gets decremented. If the value drops below a
certain level, the task is preempted by the scheduler and other tasks
get closer to the processor. By this algorithm, CFS tries to reach the
ideal state where the “wait” value is always zero.

Since the Linux kernel version 2.6.24, CFS can be tuned to be fair to
users or groups rather than to tasks only. Runnable tasks are then
grouped to form entities, and CFS tries to be fair to these entities
instead of individual runnable tasks. The scheduler also tries to be
fair to individual tasks within these entities.

Tasks can be grouped in two mutually exclusive ways:

By user IDs

By kernel control groups.

The way the kernel scheduler lets you group the runnable tasks depends
on setting the kernel compile-time options
CONFIG_FAIR_USER_SCHED and
CONFIG_FAIR_CGROUP_SCHED. The default setting in
SUSE® Linux Enterprise Server11 SP3 is to use control groups, which lets
you create groups as needed. For more information, see
Chapter 10, Kernel Control Groups.

Basic aspects of the task scheduler behavior can be set through the
kernel configuration options. Setting these options is part of the
kernel compilation process. Because compiling the kernel is a
complex task that is out of the scope of this document, refer to a
relevant source of information.

Kernel Compilation

If you run SUSE Linux Enterprise Server on a kernel that was not shipped with it, for
example on a self-compiled kernel, you lose the entire support
entitlement.

The chrt command sets or retrieves the real-time
scheduling attributes of a running process, or runs a command with the
specified attributes. You can set or retrieve both the scheduling policy
and the priority of a process.
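The following sketch shows both uses of chrt: reading the attributes of the current shell, and starting a command under the SCHED_BATCH policy (demoting to batch typically needs no root). The fallback message is an assumption for systems without util-linux's chrt:

```shell
# Read the scheduling policy/priority of the current shell, then start a
# command under SCHED_BATCH. Guarded in case chrt is not installed.
if command -v chrt >/dev/null 2>&1; then
    chrt -p $$                              # show policy and priority of this shell
    chrt -b 0 true && echo "ran under SCHED_BATCH"
else
    echo "chrt (util-linux) not installed"
fi
```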

The sysctl interface for examining and changing
kernel parameters at runtime introduces important variables by means of
which you can change the default behavior of the task scheduler. The
sysctl syntax is simple, and all the following
commands must be entered on the command line as root.

Note that variables ending with “_ns” and
“_us” accept values in nanoseconds and microseconds,
respectively.
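For example, a tunable can be read with sysctl -n and written (as root) with sysctl -w. This is a sketch; exact variable names vary between kernel versions, hence the fallback:

```shell
# Read one scheduler tunable; the write is only echoed because it needs root
# and would change system behavior.
sysctl -n kernel.sched_rt_period_us 2>/dev/null || echo "variable not present"
echo "as root you would run: sysctl -w kernel.sched_rt_period_us=1000000"
```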

A list of the most important task scheduler sysctl
tuning variables (located at /proc/sys/kernel/)
with a short description follows:

sched_child_runs_first

A freshly forked child runs before the parent continues execution.
Setting this parameter to 1 is beneficial for an
application in which the child performs an exec after fork. For
example, make
-j<NO_CPUS>
performs better when sched_child_runs_first is turned off. The
default value is 0.

sched_compat_yield

Enables the aggressive yield behavior of the old O(1) scheduler. Java
applications that use synchronization extensively perform better with
this value set to 1. Only use it when you see a
drop in performance. The default value is 0.

Expect applications that depend on the sched_yield() syscall behavior
to perform better with the value set to 1.

sched_migration_cost

Amount of time after the last execution that a task is considered to
be “cache hot” in migration decisions. A
“hot” task is less likely to be migrated, so increasing
this variable reduces task migrations. The default value is
500000 (ns).

If the CPU idle time is higher than expected when there are runnable
processes, try reducing this value. If tasks bounce between CPUs or
nodes too often, try increasing it.

sched_latency_ns

Targeted preemption latency for CPU-bound tasks. A task's timeslice is
its weighted fair share of the scheduling period:

timeslice = scheduling period * (task's weight/total weight of tasks
in the run queue)

The task's weight depends on the task's nice level and the scheduling
policy. Minimum task weight for a SCHED_OTHER task is 15,
corresponding to nice 19. The maximum task weight is 88761,
corresponding to nice -20.

Timeslices become smaller as the load increases. When the number of
runnable tasks exceeds
sched_latency_ns/sched_min_granularity_ns,
the slice becomes number_of_running_tasks *
sched_min_granularity_ns. Prior to that, the
slice is equal to sched_latency_ns.
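The formula above can be worked through for a simple case: a 20 ms scheduling period shared by one nice-0 task and one nice-19 task. The nice-19 weight of 15 comes from the text; the nice-0 weight of 1024 is the standard CFS value:

```shell
# Worked example of the timeslice formula: two runnable SCHED_OTHER tasks,
# one at nice 0 (weight 1024) and one at nice 19 (weight 15).
awk 'BEGIN {
    period = 20; w0 = 1024; w19 = 15; total = w0 + w19
    printf "nice 0:  %.1f ms\n", period * w0  / total
    printf "nice 19: %.1f ms\n", period * w19 / total
}'
```

The higher-priority task receives almost the entire period, which is exactly the "weighted fair share" behavior described above.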

This value also specifies the maximum amount of time during which a
sleeping task is considered to be running for entitlement
calculations. Increasing this variable increases the amount of time a
waking task may consume before being preempted, thus increasing
scheduler latency for CPU bound tasks. The default value is
20000000 (ns).

sched_wakeup_granularity_ns

The wake-up preemption granularity. Settings larger than half of
sched_latency_ns will result in zero
wake-up preemption, and short duty cycle tasks will be unable to
compete with CPU hogs effectively.

sched_rt_period_us

Period over which real-time task bandwidth enforcement is measured.
The default value is 1000000 (µs).

sched_rt_runtime_us

Quantum allocated to real-time tasks during sched_rt_period_us.
Setting it to -1 disables RT bandwidth enforcement. By default, RT
tasks may consume 95% of each second of CPU time, leaving 5% (0.05 s)
to be used by SCHED_OTHER tasks.
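The reserved real-time share can be computed from the two tunables directly; this sketch substitutes the defaults from the text when the files are unreadable (for example, inside a container):

```shell
# Compute the share of CPU time that real-time tasks may consume.
period=$(cat /proc/sys/kernel/sched_rt_period_us 2>/dev/null || echo 1000000)
runtime=$(cat /proc/sys/kernel/sched_rt_runtime_us 2>/dev/null || echo 950000)
awk -v p="$period" -v r="$runtime" 'BEGIN {
    if (r < 0) print "RT bandwidth enforcement disabled"
    else printf "RT tasks may use %.0f%% of each second\n", 100 * r / p
}'
```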

sched_features

Provides information about specific debugging features.

sched_stat_granularity_ns

Specifies the granularity for collecting task scheduler statistics.

sched_nr_migrate

Controls how many tasks can be moved across processors through
migration software interrupts (softirq). If a large number of tasks
is created by SCHED_OTHER policy, they will all be run on the same
processor. The default value is 32. Increasing
this value gives a performance boost to large SCHED_OTHER threads at
the expense of increased latencies for real-time tasks.

CFS comes with a new, improved debugging interface, and provides runtime
statistics information. Relevant files were added to the
/proc file system, which can be examined simply
with the cat or less command. A
list of the related /proc files follows with their
short description:

/proc/sched_debug

Contains the current values of all tunable variables (see
Section 14.4.6, “Runtime Tuning with sysctl”) that affect
the task scheduler behavior, CFS statistics, and information about
the run queue on all available processors.

/proc/schedstat

Displays statistics relevant to the current run queue. Also
domain-specific statistics for SMP systems are displayed for all
connected processors. Because the output format is not user-friendly,
read the contents of
/usr/src/linux/Documentation/scheduler/sched-stats.txt
for more information.

In order to understand and tune the memory management behavior of the
kernel, it is important to first have an overview of how it works and
cooperates with other subsystems.

The memory management subsystem, also called the virtual memory manager,
will subsequently be referred to as “VM”. The role of the VM
is to manage the allocation of physical memory (RAM) for the entire kernel
and user programs. It is also responsible for providing a virtual memory
environment for user processes (managed via POSIX APIs with Linux
extensions). Finally, the VM is responsible for freeing up RAM when there
is a shortage, either by trimming caches or swapping out
“anonymous” memory.

The most important thing to understand when examining and tuning VM is how
its caches are managed. The basic goal of the VM's caches is to minimize
the cost of I/O as generated by swapping and file system operations
(including network file systems). This is achieved by avoiding I/O
completely, or by submitting I/O in better patterns.

Free memory will be used and filled up by these caches as required. The
more memory is available for caches and anonymous memory, the more
effectively caches and swapping will operate. However, if a memory
shortage is encountered, caches will be trimmed or memory will be swapped
out.

For a particular workload, the first thing that can be done to improve
performance is to increase memory and reduce the frequency that memory
must be trimmed or swapped. The second thing is to change the way caches
are managed by changing kernel parameters.

Finally, the workload itself should be examined and tuned as well. If an
application is allowed to run more processes or threads, the effectiveness
of VM caches can be reduced if each process is operating in its own area of
the file system. Memory overheads are also increased. If applications
allocate their own buffers or caches, larger caches will mean that less
memory is available for VM caches. However, more processes and threads can
mean more opportunity to overlap and pipeline I/O, and may take better
advantage of multiple cores. Experimentation will be required for the best
results.

Anonymous Memory

Anonymous memory tends to be program heap and stack memory (for example,
malloc()). It is reclaimable, except in special
cases such as mlock or if there is no available swap
space. Anonymous memory must be written to swap before it can be
reclaimed. Swap I/O (both swapping in and swapping out pages) tends to
be less efficient than pagecache I/O, due to allocation and access
patterns.

Pagecache

A cache of file data. When a file is read from disk or network, the
contents are stored in pagecache. No disk or network access is required,
if the contents are up-to-date in pagecache. tmpfs and shared memory
segments count toward pagecache.

When a file is written to, the new data is stored in pagecache before
being written back to a disk or the network (making it a write-back
cache). When a page has new data not written back yet, it is called
“dirty”. Pages not classified as dirty are
“clean”. Clean pagecache pages can be reclaimed if there is
a memory shortage by simply freeing them. Dirty pages must first be made
clean before being reclaimed.

Buffercache

This is a type of pagecache for block devices (for example, /dev/sda). A
file system typically uses the buffercache when accessing its on-disk
“meta-data” structures such as inode tables, allocation
bitmaps, and so forth. Buffercache can be reclaimed similarly to
pagecache.

Writeback

As applications write to files, the pagecache (and buffercache) becomes
dirty. When pages have been dirty for a given amount of time, or when
the amount of dirty memory reaches a particular percentage of RAM, the
kernel begins writeback. Flusher threads perform writeback in the
background and allow applications to continue running. If the I/O cannot
keep up with applications dirtying pagecache, and dirty data reaches a
critical percentage of RAM, then applications begin to be throttled to
prevent dirty data exceeding this threshold.

Readahead

The VM monitors file access patterns and may attempt to perform
readahead. Readahead reads pages into the pagecache from the file system
that have not been requested yet. This is done to allow fewer, larger
I/O requests to be submitted (which is more efficient), and to allow
I/O to be pipelined (performed at the same time as the application is
running).

Directory Entry Cache

This is an in-memory cache of the directory entries in the system.
These contain a name (the name of a file), the inode which it refers
to, and children entries. This cache is used when traversing the
directory structure and accessing a file by name.

To restore a SUSE Linux Enterprise Server 10-like behavior, M_MMAP_THRESHOLD should be
set to 128*1024. This can be done with the mallopt() call from the
application, or by setting the MALLOC_MMAP_THRESHOLD environment variable
before running the application.
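The environment-variable route can be sketched as follows; the variable name follows the text above (glibc also honors a variant with a trailing underscore, MALLOC_MMAP_THRESHOLD_):

```shell
# Request the SLES 10-like mmap threshold of 128*1024 bytes for programs
# started from this shell; only the echo runs here, the exported variable
# takes effect in any program launched afterwards.
MALLOC_MMAP_THRESHOLD=$((128 * 1024))
export MALLOC_MMAP_THRESHOLD
echo "threshold set to $MALLOC_MMAP_THRESHOLD bytes"
```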

Kernel memory that is reclaimable (caches, described above) will be
trimmed automatically during memory shortages. Most other kernel memory
cannot be easily reduced but is a property of the workload given to the
kernel.

Reducing the requirements of the userspace workload will reduce the
kernel memory usage (fewer processes, fewer open files and sockets,
etc.)

When tuning the VM it should be understood that some of the changes will
take time to affect the workload and take full effect. If the workload
changes throughout the day, it may behave very differently at different
times. A change that increases throughput under some conditions may
decrease it under other conditions.

/proc/sys/vm/swappiness

This control is used to define how aggressively the kernel swaps out
anonymous memory relative to pagecache and other caches. Increasing
the value increases the amount of swapping. The default value is
60.

Swap I/O tends to be much less efficient than other I/O. However,
some pagecache pages will be accessed much more frequently than less
used anonymous memory. The right balance should be found here.

If swap activity is observed during slowdowns, it may be worth
reducing this parameter. If there is a lot of I/O activity and the
amount of pagecache in the system is rather small, or if there are
large dormant applications running, increasing this value might
improve performance.

Note that the more data is swapped out, the longer the system will
take to swap data back in when it is needed.
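A quick way to inspect the current setting is sketched below; the default of 60 from the text is assumed when the file cannot be read:

```shell
# Show the current swappiness. Writing it needs root, for example:
#   sysctl -w vm.swappiness=35
cat /proc/sys/vm/swappiness 2>/dev/null || echo 60
```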

/proc/sys/vm/vfs_cache_pressure

This variable controls the tendency of the kernel to reclaim memory
used for VFS caches, versus pagecache and
swap. Increasing this value increases the rate at which VFS caches
are reclaimed.

It is difficult to know when this should be changed, other than by
experimentation. The slabtop command (part of the
package procps) shows top
memory objects used by the kernel. The vfs caches are the "dentry"
and the "*_inode_cache" objects. If these are consuming a large
amount of memory in relation to pagecache, it may be worth trying to
increase pressure. This could also help to reduce swapping. The default
value is 100.

/proc/sys/vm/min_free_kbytes

This controls the amount of memory that is kept free for use by
special reserves including “atomic” allocations (those
which cannot wait for reclaim). This should not normally be lowered
unless the system is being very carefully tuned for memory usage
(normally useful for embedded rather than server applications). If
“page allocation failure” messages and stack traces are
frequently seen in logs, min_free_kbytes could be increased until the
errors disappear. There is no need for concern if these messages are
very infrequent. The default value depends on the amount of RAM.

One important change in writeback behavior since SUSE Linux Enterprise Server 10 is
that modification to file-backed mmap() memory is accounted immediately
as dirty memory (and is subject to writeback), whereas previously it
would only be subject to writeback after it was unmapped, upon an
msync() system call, or under heavy memory pressure.

Some applications do not expect mmap modifications to be subject to such
writeback behavior, and performance can be reduced. Berkeley DB (and
applications using it) is one known example that can cause problems.
Increasing writeback ratios and times can improve this type of slowdown.

/proc/sys/vm/dirty_background_ratio

This is the percentage of the total amount of free and reclaimable
memory. When the amount of dirty pagecache exceeds this percentage,
writeback threads start writing back dirty memory. The default value
is 10 (%).

/proc/sys/vm/dirty_ratio

Similar percentage value as above. When this is exceeded,
applications that want to write to the pagecache are blocked and
start performing writeback as well. The default value is
40 (%).

These two values together determine the pagecache writeback behavior. If
these values are increased, more dirty memory is kept in the system for
a longer time. With more dirty memory allowed in the system, the chance
to improve throughput by avoiding writeback I/O and by submitting more
optimal I/O patterns increases. However, more dirty memory can harm
latency when memory needs to be reclaimed, or at data integrity
(sync) points when it needs to be written back to disk.
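The two thresholds can be inspected side by side; the defaults from the text are substituted if the files are unreadable:

```shell
# Show the background and foreground (throttling) writeback thresholds.
bg=$(cat /proc/sys/vm/dirty_background_ratio 2>/dev/null || echo 10)
fg=$(cat /proc/sys/vm/dirty_ratio 2>/dev/null || echo 40)
echo "background writeback starts at ${bg}% dirty, applications throttle at ${fg}%"
```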

/sys/block/<bdev>/queue/read_ahead_kb

If one or more processes are sequentially reading a file, the kernel
reads some data in advance (ahead) in order to reduce the amount of
time that processes have to wait for data to be available. The actual
amount of data being read in advance is computed dynamically, based
on how sequential the I/O pattern seems to be. This parameter sets the
maximum amount of data that the kernel reads ahead for a single file.
If you observe that large sequential reads from a file are not fast
enough, you can try increasing this value. Increasing it too far may
result in readahead thrashing where pagecache used for readahead is
reclaimed before it can be used, or slowdowns due to a large amount
of useless I/O. The default value is 512 (kb).

Another increasingly important role of the VM is to provide good NUMA
allocation strategies. NUMA stands for non-uniform memory access, and
most of today's multi-socket servers are NUMA machines. NUMA is a
secondary concern to managing swapping and caches in terms of
performance, and there are lots of documents about improving NUMA memory
allocations. One particular parameter interacts with page reclaim:

/proc/sys/vm/zone_reclaim_mode

This parameter controls whether memory reclaim is performed on a local
NUMA node even if there is plenty of memory free on other nodes. This
parameter is automatically turned on on machines with more pronounced
NUMA characteristics.

If the VM caches are not being allowed to fill all of memory on a NUMA
machine, it could be due to zone_reclaim_mode being set. Setting to 0
will disable this behavior.
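Checking the current setting is straightforward; the file is only meaningful on NUMA machines and may be absent elsewhere, so 0 is assumed as a fallback:

```shell
# Show whether local-node reclaim is enabled (non-zero means enabled).
cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || echo 0
```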

slabtop: This tool provides detailed information
about kernel slab memory usage. buffer_head, dentry, inode_cache,
ext3_inode_cache, etc. are the major caches. This command is available
with the package procps.

The network subsystem is rather complex and its tuning highly depends on
the system use scenario and also on external factors such as software
clients or hardware components (switches, routers, or gateways) in your
network. The Linux kernel aims more at reliability and low latency than
low overhead and high throughput. Other settings can mean less security,
but better performance.

Networking is largely based on the TCP/IP protocol and a socket interface
for communication; for more information about TCP/IP, see
Chapter 21, Basic Networking (↑Administration Guide). The Linux kernel handles data it
receives or sends via the socket interface in socket buffers. These
kernel socket buffers are tunable.

TCP Autotuning

Since kernel version 2.6.17 full autotuning with 4 MB maximum buffer
size exists. This means that manual tuning in most cases will not
improve networking performance considerably. It is often best not to
touch the following variables, or, at least, to check the outcome of
tuning efforts carefully.

If you update from an older kernel, it is recommended to remove manual
TCP tunings in favor of the autotuning feature.

The special files in the /proc file system can
modify the size and behavior of kernel socket buffers; for general
information about the /proc file system, see
Section 2.6, “The /proc File System”. Find networking related files in:

/proc/sys/net/core
/proc/sys/net/ipv4
/proc/sys/net/ipv6

General net variables are explained in the
kernel documentation
(linux/Documentation/sysctl/net.txt). Special
ipv4 variables are explained in
linux/Documentation/networking/ip-sysctl.txt and
linux/Documentation/networking/ipvs-sysctl.txt.

In the /proc file system, for example, it is
possible to either set the Maximum Socket Receive Buffer and Maximum
Socket Send Buffer for all protocols, or to set both these options for
the TCP protocol only (in ipv4), thus overriding the
setting for all protocols (in core).

/proc/sys/net/ipv4/tcp_moderate_rcvbuf

If /proc/sys/net/ipv4/tcp_moderate_rcvbuf is set
to 1, autotuning is active and buffer size is
adjusted dynamically.

/proc/sys/net/ipv4/tcp_rmem

Three values that set the minimum, initial, and maximum size of the
Memory Receive Buffer per connection. They define the actual memory
usage, not just the TCP window size.

/proc/sys/net/ipv4/tcp_wmem

The same as tcp_rmem, but just for Memory Send
Buffer per connection.
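The current min/default/max triplets can be read directly from /proc; values differ per system, and the files may be absent on non-Linux hosts, hence the fallback:

```shell
# Print the per-connection TCP receive and send buffer triplets.
for f in /proc/sys/net/ipv4/tcp_rmem /proc/sys/net/ipv4/tcp_wmem; do
    printf '%s: %s\n' "${f##*/}" "$(cat "$f" 2>/dev/null || echo "not available")"
done
```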

/proc/sys/net/core/rmem_max

Set to limit the maximum receive buffer size that applications can
request.

/proc/sys/net/core/wmem_max

Set to limit the maximum send buffer size that applications can
request.

Via /proc it is possible to disable TCP features
that you do not need (all TCP features are switched on by default). For
example, check the following files:

/proc/sys/net/ipv4/tcp_timestamps

TCP timestamps are defined in RFC1323.

/proc/sys/net/ipv4/tcp_window_scaling

TCP window scaling is also defined in RFC1323.

/proc/sys/net/ipv4/tcp_sack

Selective acknowledgments (SACK).

Use sysctl to read or write variables of the
/proc file system. sysctl is
preferable to cat (for reading) and
echo (for writing), because it also reads settings
from /etc/sysctl.conf and, thus, those settings
survive reboots reliably. With sysctl you can read all
variables and their values easily; as root use the following
command to list TCP related settings:

sysctl -a | grep tcp

Side-Effects of Tuning Network Variables

Tuning network variables can affect other system resources such as CPU
or memory use.

The Linux firewall and masquerading features are provided by the
Netfilter kernel modules. This is a highly configurable rule-based
framework. If a rule matches a packet, Netfilter accepts or denies it or
takes special action (“target”) as defined by rules such as
address translation.

There are quite a few properties that Netfilter is able to take into
account. Thus, the more rules that are defined, the longer packet
processing may take. Also, advanced connection tracking can be rather
expensive and thus slow down overall networking.

When the kernel queue becomes full, all new packets are dropped, causing
existing connections to fail. The 'fail-open' feature, available since
SUSE Linux Enterprise Server 11 SP3, allows a user to temporarily disable the packet inspection
and maintain the connectivity under heavy network traffic. For reference,
see https://home.regit.org/netfilter-en/using-nfqueue-and-libnetfilter_queue/.

SUSE Linux Enterprise Server comes with a number of tools that help you obtain useful
information about your system. You can use the information for various
purposes, for example, to debug and find problems in your program, to
discover places causing performance drops, or to trace a running process
to find out what system resources it uses. Most of the tools are part of
the installation media; otherwise, you can install them from the
downloadable SUSE Software Development Kit.

Tracing and Impact on Performance

While a running process is being monitored for system or library calls,
the performance of the process is heavily reduced. You are advised to use
tracing tools only for the time you need to collect the data.

The strace command traces system calls of a process
and signals received by the process. strace can either
run a new command and trace its system calls, or you can attach
strace to an already running command. Each line of the
command's output contains the system call name, followed by its arguments
in parentheses and its return value.

To run a new command and start tracing its system calls, enter the
command to be monitored as you normally do, and add
strace at the beginning of the command line:
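A minimal sketch of this is shown below; `ls /` stands in for any command, and the block is guarded because strace may be missing, or ptrace may be blocked in restricted environments:

```shell
# Trace the system calls of a new ls process. strace writes the trace to
# stderr, so only the last few lines are shown here.
if command -v strace >/dev/null 2>&1; then
    strace ls / 2>&1 >/dev/null | tail -n 3
else
    echo "strace not installed"
fi
```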

If you need to analyze the output of strace and the
output messages are too long to be inspected directly in the console
window, use -o filename to write the output
to a file. In that case, unnecessary messages, such
as information about attaching and detaching processes, are suppressed.
You can also suppress these messages (normally printed on the standard
output) with -q. To optionally prepend timestamps to
each line with a system call, use -t:
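Combining these options looks like the following sketch (the log path is illustrative):

```shell
# Write a quiet (-q), timestamped (-t) trace of ls to a file (-o),
# then show the first lines of the trace.
if command -v strace >/dev/null 2>&1; then
    strace -q -t -o /tmp/strace-demo.log ls / >/dev/null 2>&1
    [ -s /tmp/strace-demo.log ] && head -n 2 /tmp/strace-demo.log \
        || echo "no trace produced"
else
    echo "strace not installed"
fi
```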

ltrace traces dynamic library calls of a process. It
is used in a similar way to strace, and most of their
parameters have a very similar or identical meaning. By default,
ltrace uses /etc/ltrace.conf or
~/.ltrace.conf configuration files. You can,
however, specify an alternative one with the -F
config_file option.

In addition to library calls, ltrace with the
-S option can trace system calls as well:
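A sketch of combined library and system call tracing follows; ltrace is often not installed by default, hence the guard, and the log path is illustrative:

```shell
# Trace library calls and, with -S, system calls of a short command.
if command -v ltrace >/dev/null 2>&1; then
    ltrace -S -o /tmp/ltrace-demo.log ls / >/dev/null 2>&1
    [ -s /tmp/ltrace-demo.log ] && head -n 2 /tmp/ltrace-demo.log \
        || echo "no trace produced"
else
    echo "ltrace not installed"
fi
```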

Valgrind is a set of tools to debug and profile your programs so that
they can run faster and with fewer errors. Valgrind can detect problems
related to memory management and threading, and can also serve as a
framework for building new debugging tools.

Valgrind is not shipped with the standard SUSE Linux Enterprise Server distribution. To
install it on your system, obtain the SUSE Software Development Kit, and either install
it as an Add-On product and run

zypper install valgrind

or browse through the SUSE Software Development Kit directory tree, locate the Valgrind package
and install it with the rpm command.

The main advantage of Valgrind is that it works with existing compiled
executables. You do not have to recompile or modify your programs to
make use of it. Run Valgrind like this:

valgrind valgrind_options your-program your-program-options

Valgrind consists of several tools, and each provides specific
functionality. Information in this section is general and valid
regardless of the used tool. The most important configuration option is
--tool. This option tells Valgrind which tool to run.
If you omit this option, memcheck is selected
by default. For example, if you want to run find ~
-name .bashrc with Valgrind's
memcheck tool, enter the following in the
command line:
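A sketch of that invocation follows; --tool=memcheck is the default and is spelled out only for clarity, and the block is guarded because Valgrind may not be installed:

```shell
# Run find under memcheck; Valgrind's report goes to stderr, so only the
# last lines (including the error summary) are shown here.
if command -v valgrind >/dev/null 2>&1; then
    valgrind --tool=memcheck find ~ -name .bashrc 2>&1 | tail -n 3
else
    echo "valgrind not installed (available from the SUSE Software Development Kit)"
fi
```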

Valgrind can read options at start-up. There are three places which
Valgrind checks:

The file .valgrindrc in the home directory of the
user who runs Valgrind.

The environment variable $VALGRIND_OPTS

The file .valgrindrc in the current directory
where Valgrind is run from.

These resources are parsed exactly in this order, and later given
options take precedence over earlier processed options. Options specific
to a particular Valgrind tool must be prefixed with the tool name and a
colon. For example, if you want cachegrind to
always write profile data to the
/tmp/cachegrind_PID.log,
add the following line to the .valgrindrc file in
your home directory:
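The line in question is sketched below; it is written to a demonstration file here so a real ~/.valgrindrc is not touched, and --cachegrind-out-file expands %p to the PID of the profiled process:

```shell
# Write the tool-prefixed cachegrind option to a demo valgrindrc file
# and show the resulting line.
rc="${TMPDIR:-/tmp}/valgrindrc.demo"
printf 'cachegrind:--cachegrind-out-file=/tmp/cachegrind_%%p.log\n' > "$rc"
cat "$rc"
```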

Valgrind takes control of your executable before it starts. It reads
debugging information from the executable and related shared libraries.
The executable's code is redirected to the selected Valgrind tool, and
the tool adds its own code to handle its debugging. Then the code is
handed back to the Valgrind core and the execution continues.

For example, memcheck adds its code, which
checks every memory access. As a consequence, the program runs much
slower than in the native execution environment.

Valgrind simulates every instruction of your program. Therefore, it not
only checks the code of your program, but also all related libraries
(including the C library), libraries used for graphical environment, and
so on. If you try to detect errors with Valgrind, it also detects errors
in associated libraries (like C, X11, or Gtk libraries). Because you
probably do not need these errors, Valgrind can selectively suppress
these error messages using suppression files. The
--gen-suppressions=yes option tells Valgrind to report these
suppressions so that you can copy them to a file.

Note that you should supply a real executable (machine code) as a
Valgrind argument. If your application is run, for example,
from a shell or a Perl script, you will mistakenly get error reports
related to /bin/sh (or
/usr/bin/perl). In such cases, you can use
--trace-children=yes or,
better, supply the real executable to avoid any processing
confusion.

The ==6558== prefix introduces Valgrind's messages and
contains the process ID number (PID). This makes it easy to distinguish
Valgrind's messages from the output of the program itself, and to decide
which messages belong to a particular process.

To make Valgrind's messages more detailed, use -v or
even -v -v.

Basically, you can make Valgrind send its messages to three different
places:

By default, Valgrind sends its messages to the file descriptor 2,
which is the standard error output. You can tell Valgrind to send its
messages to any other file descriptor with the
--log-fd=file_descriptor_number
option.

The second and probably more useful way is to send Valgrind's messages
to a file with
--log-file=filename. This
option accepts several variables, for example, %p
gets replaced with the PID of the currently profiled process. This way
you can send messages to different files based on their PID.
%q{env_var} is replaced with the value of the
related env_var environment variable.

The following example checks for possible memory errors during the
Apache Web server restart, while following child processes and
writing detailed Valgrind messages to separate files distinguished
by the current process PID:
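A hedged sketch of that command follows. On a real SLES server the traced command would be rcapache2 restart; true is substituted here so the sketch runs anywhere, and the log path is illustrative:

```shell
# Follow children and write per-PID logs (%p expands to each process ID).
if command -v valgrind >/dev/null 2>&1; then
    valgrind -v --tool=memcheck --trace-children=yes \
        --log-file=/tmp/valgrind-%p.log true
    ls /tmp/valgrind-*.log 2>/dev/null | head -n 1
else
    echo "valgrind not installed"
fi
# On a real server: valgrind -v --trace-children=yes \
#     --log-file=/tmp/valgrind-%p.log rcapache2 restart
```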

You may also prefer to send Valgrind's messages over the network.
You need to specify the aa.bb.cc.dd IP address and
port_num port number of the network socket with the
--log-socket=aa.bb.cc.dd:port_num
option. If you omit the port number, 1500 will be used.

It is useless to send Valgrind's messages to a network socket if no
application is capable of receiving them on the remote machine. That
is why valgrind-listener, a simple listener, is
shipped together with Valgrind. It accepts connections on the
specified port and copies everything it receives to the standard
output.
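Assuming the remote machine has the IP address 192.168.0.1 and an arbitrarily chosen port 12345, and that ./myprog stands for the application to profile, the setup might be sketched as:

```shell
# On the remote machine (192.168.0.1): accept Valgrind messages on
# port 12345 and copy everything received to standard output.
valgrind-listener 12345

# On the machine being profiled: send Valgrind's messages to the
# listener over the network instead of standard error.
valgrind --log-socket=192.168.0.1:12345 ./myprog
```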

Valgrind remembers all error messages, and if it detects a new error,
the error is compared against old error messages. This way Valgrind
checks for duplicate error messages. In case of a duplicate error, it is
recorded but no message is shown. This mechanism prevents you from being
overwhelmed by millions of duplicate errors.

The -v option adds a summary of all reports (sorted
by their total count) to the end of Valgrind's output.
Moreover, Valgrind stops collecting errors if it detects either 1000
different errors, or 10,000,000 errors in total. If you want to suppress
this limit and see all error messages, use
--error-limit=no.

Some errors tend to cause others. Therefore, fix errors in the order
they appear and re-check the program after each fix.

For a complete list of options related to the described tracing tools,
see the corresponding man page (man 1 strace,
man 1 ltrace, and man 1
valgrind).

Advanced usage of Valgrind is beyond the scope of this document.
Valgrind is very well documented; see the
Valgrind
User Manual. These pages are indispensable if you need more
advanced information on Valgrind or on the usage and purpose of its
standard tools.

kexec is a tool to boot to another kernel from the currently running
one. You can perform faster system reboots without any hardware
initialization. You can also prepare the system to boot to another kernel
if the system crashes.

With kexec, you can replace the running kernel with another one without
a hard reboot. The tool is useful for several reasons:

Faster system rebooting

If you need to reboot the system frequently, kexec can save you
significant time.

Avoiding unreliable firmware and hardware

Computer hardware is complex and serious problems may occur during the
system start-up. You cannot always replace unreliable hardware
immediately. kexec boots the kernel to a controlled environment with
the hardware already initialized. The risk of unsuccessful system start
is then minimized.

Saving the dump of a crashed kernel

kexec preserves the contents of the physical memory. After the
production kernel fails, the
capture kernel (an additional kernel running in a
reserved memory range) saves the state of the failed kernel. The saved
image can help you with the subsequent analysis.

Booting without GRUB or LILO configuration

When the system boots a kernel with kexec, it skips the boot loader
stage. Normal booting procedure can fail due to an error in the boot
loader configuration. With kexec, you do not depend on a working boot
loader configuration.

If you intend to use kexec on SUSE® Linux Enterprise Server to speed up reboots or
avoid potential hardware problems, you need to install the
kexec-tools package. It contains a script called
kexec-bootloader, which reads the boot loader
configuration and runs kexec with the same kernel options as the normal
boot loader does. kexec-bootloader -h gives you the list of possible options.

To set up an environment that helps you obtain useful debug information
in case of a kernel crash, you need to install
makedumpfile in addition.

The preferred method to use kdump in SUSE Linux Enterprise Server is through the
YaST kdump module. Install the package yast2-kdump
by entering zypper install yast2-kdump in the command
line as root.

The most important component of kexec is the
/sbin/kexec command. You can load a kernel with
kexec in two different ways:

kexec -l kernel_image loads the kernel into
the address space of the production kernel for a regular reboot. You can
later boot to this kernel with kexec -e.

kexec -p kernel_image loads the kernel into
a reserved area of memory. This kernel will be booted automatically
when the system crashes.

If you want to boot another kernel and preserve the data of the
production kernel when the system crashes, you need to reserve a
dedicated area of the system memory. The production kernel never loads to
this area because it must be always available. It is used for the capture
kernel so that the memory pages of the production kernel can be
preserved. You reserve the area with the
crashkernel=size@offset command line parameter of the
production kernel. Note that this is not a parameter of the capture
kernel. The capture kernel does not use kexec at all.

The capture kernel is loaded to the reserved area and waits for the
kernel to crash. Then kdump tries to invoke the capture kernel because
the production kernel is no longer reliable at this stage. This means
that even kdump can fail.

To load the capture kernel, you need to include the kernel boot
parameters. Usually, the initial RAM file system is used for booting. You
can specify it with --initrd=filename.
With --append=cmdline, you append options to the command
line of the kernel to boot. It is helpful to include the command line of
the production kernel if these options are necessary for the kernel to
boot. You can simply copy the command line with
--append="$(cat /proc/cmdline)" or add more options
with --append="$(cat /proc/cmdline) more_options".

You can always unload a previously loaded kernel. To unload a kernel
that was loaded with the -l option, use the
kexec -u command. To unload a crash
kernel loaded with the -p option, use the
kexec -p -u command.

Unmount all mounted file systems except the root file system with
umount -a

Unmounting Root Filesystem

Unmounting all file systems will most likely produce a device
is busy warning message. The root file system cannot be
unmounted if the system is running. Ignore the warning.

Remount the root file system in read-only mode:

mount -o remount,ro /

Initiate the reboot of the kernel that you loaded in
Step 4 with kexec
-e

It is important to unmount disk volumes that were previously mounted in
read-write mode. The reboot system call acts
immediately upon calling. Hard disk volumes mounted in read-write mode
are neither synchronized nor unmounted automatically, so the new kernel
may find them “dirty”. Read-only disk volumes and virtual file
systems do not need to be unmounted. Refer to
/etc/mtab to determine which file systems you need
to unmount.

The new kernel, previously loaded into the address space of the old
kernel, overwrites it and takes control immediately. It displays the
usual start-up messages. When the new kernel boots, it skips all hardware
and firmware checks. Make sure no warning messages appear. The file
systems should be clean if they were unmounted properly.

kexec is often used for frequent reboots, for example, when running
through the hardware detection routines takes a long time or when the
start-up is not reliable.

Rebooting with kexec

In previous versions of SUSE® Linux Enterprise Server, you had to manually edit the
configuration file /etc/sysconfig/shutdown and the
init script /etc/init.d/halt to use kexec to
reboot the system. You no longer need to edit any system files, since
version 11 is already configured for kexec reboots.

Note that neither the firmware nor the boot loader is used when the
system reboots with kexec. Any changes you make to the boot loader
configuration will be ignored until the computer performs a hard reboot.

You can use kdump to save kernel dumps. If the kernel crashes, it is
useful to copy the memory image of the crashed environment to the file
system. This saved image, called a “core dump”, can then be
debugged to find the cause of the kernel crash.

kdump works similarly to kexec (see Chapter 18, kexec and kdump).
The capture kernel is executed after the running production kernel
crashes. The difference is that kexec replaces the production kernel
with the capture kernel. With kdump, you still have access to the
memory space of the crashed production kernel. You can save the memory
snapshot of the crashed kernel in the environment of the kdump kernel.

Dumps over Network

In environments with limited local storage, you need to set up
kernel dumps over the network. kdump supports configuring the
specified network interface and bringing it up via
initrd. Both LAN and VLAN interfaces are
supported. You have to specify the network interface and the mode (dhcp
or static) either with YaST, or using the
KDUMP_NETCONFIG option in the
/etc/sysconfig/kdump file. The third way is to build
initrd
manually, for example with

/sbin/mkinitrd -D vlan0

for a dhcp VLAN interface, or

/sbin/mkinitrd -I eth0

for a static LAN interface.

You can either configure kdump manually or with YaST.

Target Filesystem for kdump Must Be Mounted During Configuration

When configuring kdump, you can specify a location to which the dumped
images will be saved (default: /var/crash). This
location must be mounted when configuring kdump, otherwise the
configuration will fail.

kdump reads its configuration from the
/etc/sysconfig/kdump file. The default configuration
is sufficient for kdump to work on most systems.
To use kdump with the default settings, follow these steps:

Append the following kernel command line option to your boot loader
configuration, and reboot the system:

crashkernel=size@offset

You can find the corresponding values for
size and offset
in the following table:

You can edit the options in /etc/sysconfig/kdump.
Reading the comments will help you understand the meaning of
individual options.

Execute the init script once with rckdump
start, or reboot the system.

After configuring kdump with the default values, check if it works as
expected. Make sure that no users are currently logged in and no
important services are running on your system. Then follow these steps:

Switch to runlevel 1 with telinit 1.

Unmount all the disk file systems except the root file system with
umount -a

Remount the root file system in read-only mode:
mount -o remount,ro /

Invoke “kernel panic” with the procfs
interface to Magic SysRq keys:

echo c > /proc/sysrq-trigger
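The test steps above can be sketched as the following sequence, run as root. Note that the last command deliberately crashes the kernel, so only run this on a test system:

```shell
telinit 1                     # switch to runlevel 1
umount -a                     # unmount all file systems; the busy root
                              # file system produces a warning (ignore it)
mount -o remount,ro /         # remount the root file system read-only
echo c > /proc/sysrq-trigger  # trigger a kernel panic via Magic SysRq
```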

The Size of Kernel Dumps

The KDUMP_KEEP_OLD_DUMPS option controls the number
of preserved kernel dumps (default is 5). Without compression, a dump
can be as large as the physical RAM. Make sure you have sufficient
space on the /var partition.

The capture kernel boots and the crashed kernel memory snapshot is saved
to the file system. The save path is given by the
KDUMP_SAVEDIR option and it defaults to
/var/crash. If
KDUMP_IMMEDIATE_REBOOT is set to
yes, the system automatically reboots into the production
kernel. Log in and check that the dump has been created under
/var/crash.

Screen Freezes in X11 Session

When kdump takes control while you are logged in to an X11 session, the
screen freezes without any notice. Some kdump activity may still be
visible (for example, deformed messages of the booting kernel on the
screen).

Do not reset the computer because kdump always needs some time to
complete its task.

In order to configure kdump with YaST, you need to install the
yast2-kdump package. Then either start the
Kernel Kdump module in the System
category of YaST Control Center, or enter yast2 kdump in the
command line as root.

In the Start-Up window, select Enable
Kdump. The default value for kdump memory is sufficient on
most systems.

Click Dump Filtering in the left pane, and check what
pages to include in the dump. You do not need to include the following
memory content to be able to debug kernel problems:

Pages filled with zero

Cache pages

User data pages

Free pages

In the Dump Target window, select the type of the
dump target and the URL where you want to save the dump. If you selected
a network protocol, such as FTP or SSH, you need to enter relevant
access information as well.

Fill in the Email Notification window if you
want kdump to inform you about its events via e-mail. After fine-tuning
kdump in the Expert Settings window, confirm your
changes with OK. kdump is now configured.

That is why the crash utility was implemented. It
analyzes crash dumps and debugs the running system as well. It provides
functionality specific to debugging the Linux kernel and is much more
suitable for advanced debugging.

If you want to debug the Linux kernel, you need to install its debugging
information package in addition. Check if the package is installed on
your system with zypper se kernel | grep debug.

Repository for Packages with Debugging Information

If you subscribed your system for online updates, you can find
“debuginfo” packages in the
*-Debuginfo-Updates online installation repository
relevant for SUSE Linux Enterprise Server11 SP3. Use YaST to enable the
repository.

To open the captured dump in crash on the machine that
produced the dump, use a command like this:
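Assuming a compressed kernel image in /boot and a dump directory created by kdump under the default KDUMP_SAVEDIR (both paths are illustrative; adjust them to your system), the command might be:

```shell
# Open the dump in crash: pass the kernel image that produced the
# dump and the saved memory image.
crash /boot/vmlinux-$(uname -r).gz /var/crash/2013-10-01-12:12/vmcore
```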

The Linux kernel comes in Executable and Linkable Format (ELF). This
file is usually called vmlinux and is directly
generated in the compilation process. Not all boot loaders, especially
on x86 (i386 and x86_64) architecture, support ELF binaries. The
following solutions exist on different architectures supported by
SUSE® Linux Enterprise Server.

The elilo boot loader, which boots the Linux
kernel on the IA64 architecture, supports loading ELF images (even
compressed ones) out of the box. The IA64 kernel package contains only
one file called vmlinuz. It is a compressed ELF
image. vmlinuz on IA64 is the same as
vmlinux.gz on x86.

The yaboot boot loader on PPC also supports
loading ELF images, but not compressed ones. In the PPC kernel package,
there is an ELF Linux kernel file vmlinux.
From the viewpoint of crash, this is the easiest
architecture.

If you decide to analyze the dump on another machine, you must check
both the architecture of the computer and the files necessary for
debugging.

You can analyze the dump on another computer only if it runs a Linux
system of the same architecture. To check the compatibility, use the
command uname -i on both computers
and compare the outputs.

If you are going to analyze the dump on another computer, you also need
the appropriate files from the kernel and
kernel debug packages.

Put the kernel dump, the kernel image from
/boot, and its associated debugging info file
from /usr/lib/debug/boot into a single empty
directory.

Additionally, copy the kernel modules from
/lib/modules/$(uname -r)/kernel/ and the
associated debug info files from
/usr/lib/debug/lib/modules/$(uname -r)/kernel/
into a subdirectory named modules.

In the directory with the dump, the kernel image, its debug info
file, and the modules subdirectory, launch the
crash utility: crash vmlinux-version
vmcore.
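The gathering steps above might be sketched as follows on the machine that produced the dump. All paths and file names, including the .debug suffix of the debug info files and the dump directory name, are illustrative assumptions:

```shell
# Collect everything needed for offline analysis in one directory.
mkdir -p dump-analysis/modules
cp /var/crash/2013-10-01-12:12/vmcore dump-analysis/            # the dump
cp /boot/vmlinux-$(uname -r).gz dump-analysis/                  # kernel image
cp /usr/lib/debug/boot/vmlinux-$(uname -r).debug dump-analysis/ # debug info

# Kernel modules and their associated debug info files go into modules/.
cp -r /lib/modules/$(uname -r)/kernel/. dump-analysis/modules/
cp -r /usr/lib/debug/lib/modules/$(uname -r)/kernel/. dump-analysis/modules/
```

After transferring the directory to the analysis machine, launch crash from inside it as described above.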

Support for Kernel Images

Compressed kernel images (gzip, not the bzImage file) are supported by
SUSE packages of crash since SUSE® Linux Enterprise Server 11. For older versions,
you have to extract the vmlinux.gz (x86) or the
vmlinuz (IA64) to vmlinux.

Regardless of the computer on which you analyze the dump, the crash
utility will produce an output similar to this:

The command output prints the first useful data: there were 42 tasks
running at the moment of the kernel crash. The cause of the crash was a
SysRq trigger invoked by the task with PID 9446. It was a Bash process,
because the echo that was used is an internal
command of the Bash shell.

The crash utility builds upon GDB and provides
many useful additional commands. If you enter bt
without any parameters, the backtrace of the task running at the moment
of the crash is printed:

Now it is clear what happened: the internal echo
command of the Bash shell sent a character to
/proc/sysrq-trigger. After the corresponding
handler recognized this character, it invoked the
crash_kexec() function. This function called
panic() and kdump saved a dump.

In addition to the basic GDB commands and the extended version of
bt, the crash utility defines many other commands
related to the structure of the Linux kernel. These commands understand
the internal data structures of the Linux kernel and present their
contents in a human readable format. For example, you can list the
tasks running at the moment of the crash with ps.
With sym, you can list all the kernel symbols with
the corresponding addresses, or inquire an individual symbol for its
value. With files, you can display all the open file
descriptors of a process. With kmem, you can display
details about the kernel memory usage. With vm, you
can inspect the virtual memory of a process, even at the level of
individual page mappings. The list of useful commands is very long and
many of these accept a wide range of options.

The commands that we mentioned reflect the functionality of the common
Linux commands, such as ps and
lsof. If you would like to find out the exact
sequence of events with the debugger, you need to know how to use GDB
and to have strong debugging skills. Both of these are out of the scope
of this document. In addition, you need to understand the Linux kernel.
Several useful sources of reference information are listed at the end of
this document.

The configuration for kdump is stored in
/etc/sysconfig/kdump. You can also use YaST to
configure it. kdump configuration options are available under
System+Kernel
Kdump in YaST Control Center. The following kdump options
may be useful for you:

You can change the directory for the kernel dumps with the
KDUMP_SAVEDIR option. Keep in mind that the size of
kernel dumps can be very large. kdump will refuse to save the dump if
the free disk space, subtracted by the estimated dump size, drops below
the value specified by the KDUMP_FREE_DISK_SIZE option.
Note that KDUMP_SAVEDIR understands URL format
protocol://specification, where
protocol is one of file,
ftp, sftp, nfs or
cifs, and specification varies for each
protocol. For example, to save a kernel dump on an FTP server, use the
following URL as a template:
ftp://username:password@ftp.example.com:123/var/crash.

Kernel dumps are usually huge and contain many pages that are not
necessary for analysis. With the KDUMP_DUMPLEVEL option,
you can omit such pages. The option accepts a numeric value between 0
and 31. If you specify 0, the dump size will
be largest. If you specify 31, the dump will be
smallest. For a complete table of possible values, see the
manual page of kdump (man 7 kdump).

Sometimes it is useful to make the kernel dump smaller, for example, if
you want to transfer the dump over the network or need to save disk
space in the dump directory. This can be done by setting
KDUMP_DUMPFORMAT to
compressed. The crash
utility supports dynamic decompression of compressed dumps.
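A minimal /etc/sysconfig/kdump fragment combining the options discussed above might look like this (the values are illustrative, not recommendations):

```shell
# Where to save dumps (local directory, in URL format).
KDUMP_SAVEDIR="file:///var/crash"
# Refuse to save a dump if the free disk space, minus the estimated
# dump size, would drop below this threshold.
KDUMP_FREE_DISK_SIZE="64"
# Dump level 31 omits zero, cache, user data, and free pages
# (smallest possible dump).
KDUMP_DUMPLEVEL="31"
# Compress the dump; crash can decompress it dynamically.
KDUMP_DUMPFORMAT="compressed"
```

Remember that the kdump service must be restarted for changes to this file to take effect.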

Changes to kdump Configuration File

You always need to execute rckdump restart after you
make manual changes to /etc/sysconfig/kdump.
Otherwise, these changes will only take effect the next time you reboot
the system.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to assure
everyone the effective freedom to copy and redistribute it, with or
without modifying it, either commercially or noncommercially. Secondarily,
this License preserves for the author and publisher a way to get credit
for their work, while not being considered responsible for modifications
made by others.

This License is a kind of "copyleft", which means that derivative works of
the document must themselves be free in the same sense. It complements the
GNU General Public License, which is a copyleft license designed for free
software.

We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free program
should come with manuals providing the same freedoms that the software
does. But this License is not limited to software manuals; it can be used
for any textual work, regardless of subject matter or whether it is
published as a printed book. We recommend this License principally for
works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that
contains a notice placed by the copyright holder saying it can be
distributed under the terms of this License. Such a notice grants a
world-wide, royalty-free license, unlimited in duration, to use that work
under the conditions stated herein. The "Document", below, refers to any
such manual or work. Any member of the public is a licensee, and is
addressed as "you". You accept the license if you copy, modify or
distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with modifications
and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the
Document that deals exclusively with the relationship of the publishers or
authors of the Document to the Document's overall subject (or to related
matters) and contains nothing that could fall directly within that overall
subject. (Thus, if the Document is in part a textbook of mathematics, a
Secondary Section may not explain any mathematics.) The relationship could
be a matter of historical connection with the subject or with related
matters, or of legal, commercial, philosophical, ethical or political
position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are
designated, as being those of Invariant Sections, in the notice that says
that the Document is released under this License. If a section does not
fit the above definition of Secondary then it is not allowed to be
designated as Invariant. The Document may contain zero Invariant Sections.
If the Document does not identify any Invariant Sections then there are
none.

The "Cover Texts" are certain short passages of text that are listed, as
Front-Cover Texts or Back-Cover Texts, in the notice that says that the
Document is released under this License. A Front-Cover Text may be at most
5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the general
public, that is suitable for revising the document straightforwardly with
generic text editors or (for images composed of pixels) generic paint
programs or (for drawings) some widely available drawing editor, and that
is suitable for input to text formatters or for automatic translation to a
variety of formats suitable for input to text formatters. A copy made in
an otherwise Transparent file format whose markup, or absence of markup,
has been arranged to thwart or discourage subsequent modification by
readers is not Transparent. An image format is not Transparent if used for
any substantial amount of text. A copy that is not "Transparent" is called
"Opaque".

Examples of suitable formats for Transparent copies include plain ASCII
without markup, Texinfo input format, LaTeX input format, SGML or XML
using a publicly available DTD, and standard-conforming simple HTML,
PostScript or PDF designed for human modification. Examples of transparent
image formats include PNG, XCF and JPG. Opaque formats include proprietary
formats that can be read and edited only by proprietary word processors,
SGML or XML for which the DTD and/or processing tools are not generally
available, and the machine-generated HTML, PostScript or PDF produced by
some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus
such following pages as are needed to hold, legibly, the material this
License requires to appear in the title page. For works in formats which
do not have any title page as such, "Title Page" means the text near the
most prominent appearance of the work's title, preceding the beginning of
the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title
either is precisely XYZ or contains XYZ in parentheses following text that
translates XYZ in another language. (Here XYZ stands for a specific
section name mentioned below, such as "Acknowledgements", "Dedications",
"Endorsements", or "History".) To "Preserve the Title" of such a section
when you modify the Document means that it remains a section "Entitled
XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty
Disclaimers are considered to be included by reference in this License,
but only as regards disclaiming warranties: any other implication that
these Warranty Disclaimers may have is void and has no effect on the
meaning of this License.

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the copyright
notices, and the license notice saying this License applies to the
Document are reproduced in all copies, and that you add no other
conditions whatsoever to those of this License. You may not use technical
measures to obstruct or control the reading or further copying of the
copies you make or distribute. However, you may accept compensation in
exchange for copies. If you distribute a large enough number of copies you
must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you
may publicly display copies.

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document's license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover Texts:
Front-Cover Texts on the front cover, and Back-Cover Texts on the back
cover. Both covers must also clearly and legibly identify you as the
publisher of these copies. The front cover must present the full title
with all words of the title equally prominent and visible. You may add
other material on the covers in addition. Copying with changes limited to
the covers, as long as they preserve the title of the Document and satisfy
these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly,
you should put the first ones listed (as many as fit reasonably) on the
actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more
than 100, you must either include a machine-readable Transparent copy
along with each Opaque copy, or state in or with each Opaque copy a
computer-network location from which the general network-using public has
access to download using public-standard network protocols a complete
Transparent copy of the Document, free of added material. If you use the
latter option, you must take reasonably prudent steps, when you begin
distribution of Opaque copies in quantity, to ensure that this Transparent
copy will remain thus accessible at the stated location until at least one
year after the last time you distribute an Opaque copy (directly or
through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the
Document well before redistributing any large number of copies, to give
them a chance to provide you with an updated version of the Document.

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the
conditions of sections 2 and 3 above, provided that you release the
Modified Version under precisely this License, with the Modified Version
filling the role of the Document, thus licensing distribution and
modification of the Modified Version to whoever possesses a copy of it. In
addition, you must do these things in the Modified Version:

Use in the Title Page (and on the covers, if any) a title distinct from
that of the Document, and from those of previous versions (which should,
if there were any, be listed in the History section of the Document).
You may use the same title as a previous version if the original
publisher of that version gives permission.

List on the Title Page, as authors, one or more persons or entities
responsible for authorship of the modifications in the Modified Version,
together with at least five of the principal authors of the Document
(all of its principal authors, if it has fewer than five), unless they
release you from this requirement.

State on the Title page the name of the publisher of the Modified
Version, as the publisher.

Preserve all the copyright notices of the Document.

Add an appropriate copyright notice for your modifications adjacent to
the other copyright notices.

Include, immediately after the copyright notices, a license notice
giving the public permission to use the Modified Version under the terms
of this License, in the form shown in the Addendum below.

Preserve in that license notice the full lists of Invariant Sections and
required Cover Texts given in the Document's license notice.

Include an unaltered copy of this License.

Preserve the section Entitled "History", Preserve its Title, and add to
it an item stating at least the title, year, new authors, and publisher
of the Modified Version as given on the Title Page. If there is no
section Entitled "History" in the Document, create one stating the
title, year, authors, and publisher of the Document as given on its
Title Page, then add an item describing the Modified Version as stated
in the previous sentence.

Preserve the network location, if any, given in the Document for public
access to a Transparent copy of the Document, and likewise the network
locations given in the Document for previous versions it was based on.
These may be placed in the "History" section. You may omit a network
location for a work that was published at least four years before the
Document itself, or if the original publisher of the version it refers
to gives permission.

For any section Entitled "Acknowledgements" or "Dedications", Preserve
the Title of the section, and preserve in the section all the substance
and tone of each of the contributor acknowledgements and/or dedications
given therein.

Preserve all the Invariant Sections of the Document, unaltered in their
text and in their titles. Section numbers or the equivalent are not
considered part of the section titles.

Delete any section Entitled "Endorsements". Such a section may not be
included in the Modified Version.

Do not retitle any existing section to be Entitled "Endorsements" or to
conflict in title with any Invariant Section.

Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices
that qualify as Secondary Sections and contain no material copied from the
Document, you may at your option designate some or all of these sections
as invariant. To do this, add their titles to the list of Invariant
Sections in the Modified Version's license notice. These titles must be
distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various parties--for
example, statements of peer review or that the text has been approved by
an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list of
Cover Texts in the Modified Version. Only one passage of Front-Cover Text
and one of Back-Cover Text may be added by (or through arrangements made
by) any one entity. If the Document already includes a cover text for the
same cover, previously added by you or by arrangement made by the same
entity you are acting on behalf of, you may not add another; but you may
replace the old one, on explicit permission from the previous publisher
that added the old one.

The author(s) and publisher(s) of the Document do not by this License give
permission to use their names for publicity for or to assert or imply
endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified versions,
provided that you include in the combination all of the Invariant Sections
of all of the original documents, unmodified, and list them all as
Invariant Sections of your combined work in its license notice, and that
you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple
identical Invariant Sections may be replaced with a single copy. If there
are multiple Invariant Sections with the same name but different contents,
make the title of each such section unique by adding at the end of it, in
parentheses, the name of the original author or publisher of that section
if known, or else a unique number. Make the same adjustment to the section
titles in the list of Invariant Sections in the license notice of the
combined work.

In the combination, you must combine any sections Entitled "History" in
the various original documents, forming one section Entitled "History";
likewise combine any sections Entitled "Acknowledgements", and any
sections Entitled "Dedications". You must delete all sections Entitled
"Endorsements".

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents
released under this License, and replace the individual copies of this
License in the various documents with a single copy that is included in
the collection, provided that you follow the rules of this License for
verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute
it individually under this License, provided you insert a copy of this
License into the extracted document, and follow this License in all other
respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and
independent documents or works, in or on a volume of a storage or
distribution medium, is called an "aggregate" if the copyright resulting
from the compilation is not used to limit the legal rights of the
compilation's users beyond what the individual works permit. When the
Document is included in an aggregate, this License does not apply to the
other works in the aggregate which are not themselves derivative works of
the Document.

If the Cover Text requirement of section 3 is applicable to these copies
of the Document, then if the Document is less than one half of the entire
aggregate, the Document's Cover Texts may be placed on covers that bracket
the Document within the aggregate, or the electronic equivalent of covers
if the Document is in electronic form. Otherwise they must appear on
printed covers that bracket the whole aggregate.

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute
translations of the Document under the terms of section 4. Replacing
Invariant Sections with translations requires special permission from
their copyright holders, but you may include translations of some or all
Invariant Sections in addition to the original versions of these Invariant
Sections. You may include a translation of this License, and all the
license notices in the Document, and any Warranty Disclaimers, provided
that you also include the original English version of this License and the
original versions of those notices and disclaimers. In case of a
disagreement between the translation and the original version of this
License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve its
Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as
expressly provided for under this License. Any other attempt to copy,
modify, sublicense or distribute the Document is void, and will
automatically terminate your rights under this License. However, parties
who have received copies, or rights, from you under this License will not
have their licenses terminated so long as such parties remain in full
compliance.

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU
Free Documentation License from time to time. Such new versions will be
similar in spirit to the present version, but may differ in detail to
address new problems or concerns. See
http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If
the Document specifies that a particular numbered version of this License
"or any later version" applies to it, you have the option of following the
terms and conditions either of that specified version or of any later
version that has been published (not as a draft) by the Free Software
Foundation. If the Document does not specify a version number of this
License, you may choose any version ever published (not as a draft) by the
Free Software Foundation.

ADDENDUM: How to use this License for your documents

Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled “GNU
Free Documentation License”.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the “with...Texts.” line with this:

with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.

If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of free
software license, such as the GNU General Public License, to permit their
use in free software.