Instrumentation overview

In-kernel slab accounting (/proc/slab_account)

uses __builtin_return_address(0) to record the address of the caller, the same mechanism used by kmem events

starts from the very first allocation

Ftrace kmem events

does not start until the ftrace subsystem is initialized, after some allocations have already been performed

supported in mainline - no need to add our own instrumentation

These two instrumentation methods are basically the same: trap each kmalloc, kfree, etc. event and record the relevant information. The difference between them is that the first post-processes the events in-kernel and creates a /proc/slab_account file to access the results.

On the other hand, analysing ftrace kmem events defers post-processing to user space, thus achieving much more flexibility. A typical trace log looks more or less like this (addresses and timestamps are illustrative):

              sh-355   [000]   100.003303: kmalloc: call_site=ffffffff810f3a6e ptr=ffff88003d8a5d80 bytes_req=100 bytes_alloc=128 gfp_flags=GFP_KERNEL
              sh-355   [000]   100.003350: kfree: call_site=ffffffff810f3b12 ptr=ffff88003d8a5d80

The disadvantage of the ftrace method is that it needs to be initialized before it can capture events. Currently, this initialization is done at fs_initcall and we're working on enabling it earlier.
For more information, check out this upstreamed patch:

trace: Move trace event enable from fs_initcall to core_initcall

This patch allows events to be enabled at core_initcall time. It's also possible to enable them at early_initcall.
Another possibility is to create a static ring buffer and then copy the captured events into the real ring buffer.

Also, we must find out whether early allocations account for significant memory usage. If not, it may not be that important to capture them. Yet another possibility is to use a brute-force printk approach for very early allocations, and somehow coalesce the data into the final report.

Using debugfs and ftrace

For more information, please refer to the canonical tracing documentation in the Linux tree:

Documentation/trace/ftrace.txt

Documentation/trace/tracepoint-analysis.txt

and everything else inside Documentation/trace/

(Actually, some of this information has been copied from there.)

Debugfs

The debug filesystem is a RAM-based filesystem that can be used to output a lot of different debugging information. This filesystem is called debugfs and can be enabled with CONFIG_DEBUG_FS:

Kernel hacking  --->
    [*] Debug Filesystem

After you enable this option and boot the built kernel, the directory /sys/kernel/debug is created as a location for the user to mount the debugfs filesystem. You can do this manually:

$ mount -t debugfs none /sys/kernel/debug

You can add a symlink to type less and get less tired:

$ ln -s /sys/kernel/debug /debug

Tracing

Once we have debugfs enabled, we need to enable tracing support. This is done with the CONFIG_TRACING option, which adds a /sys/kernel/debug/tracing directory to your mounted debugfs filesystem. Traced events can be read through /sys/kernel/debug/tracing/trace.

To dynamically enable trace events you need event tracing support (CONFIG_EVENT_TRACING, normally selected automatically by CONFIG_TRACING).
Once it is enabled you can see the available events by listing /sys/kernel/debug/tracing/available_events, and enable them through the set_event file or the per-subsystem enable files. For instance:

$ cat /sys/kernel/debug/tracing/available_events | grep kmem
$ echo kmem:kmalloc kmem:kfree > /sys/kernel/debug/tracing/set_event
$ echo 1 > /sys/kernel/debug/tracing/events/kmem/enable
$ cat /sys/kernel/debug/tracing/trace

To enable events on bootup you can add them to the kernel parameters; for instance, to enable the kmalloc and kfree kmem events:
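
trace_event=kmem:kmalloc,kmem:kfree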

Warning: if you use SLOB on non-NUMA systems, where you might expect kmalloc_node never to be called, it is actually the only one called. This is because SLOB implements only kmalloc_node, and kmalloc calls it without specifying a node. The same goes for kmem_cache_alloc_node.

Of course, this option produces a slightly smaller and slower kernel,
but this is an expected side-effect on a debug-only kernel.

We must keep in mind that no matter what internal mechanism we use to record call_site,
if it's based on __builtin_return_address, then its accuracy will depend entirely on
gcc *not* inlining automatically.

The emphasis is on the automatic part. There will be lots of functions we will
need to get inlined in order to determine the caller correctly.
These will be marked as __always_inline, as in the sketch below.
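
To illustrate, consider a sketch of an allocation-tracing wrapper (traced_kmalloc and record_alloc are hypothetical names, not kernel APIs):

#include <linux/slab.h>

/* If gcc did NOT inline this wrapper, __builtin_return_address(0)
 * would always point inside the wrapper instead of at the real
 * caller, so inlining is forced with __always_inline. */
static __always_inline void *traced_kmalloc(size_t size, gfp_t flags)
{
    void *caller = __builtin_return_address(0);
    void *ret = kmalloc(size, flags);

    record_alloc(caller, ret, size);    /* hypothetical recording hook */
    return ret;
}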

See upstreamed patch:

Makefile: Add option CONFIG_DISABLE_GCC_AUTOMATIC_INLINING

Memory accounting

Types of memory

The kernel, being a computer program, can consume memory in two different ways:
statically and dynamically.

Static memory can be measured offline, and can therefore be accounted before actually running the kernel,
using standard binary inspection utilities (readelf, objdump, size, etc).
We will explore these utilities in detail.

Dynamic memory cannot be measured offline: it's not only necessary to probe a running kernel,
but also to enable additional probe code to trace each allocation.
Fortunately for us, the Linux kernel has ftrace, a tracing framework that allows tracing of general events, and in particular memory allocation events.
We will explore this framework in detail.
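
As a small example of both kinds of memory, consider the following snippet (a sketch; the names var and i match the discussion below):

int zeroed;             /* static: zero initialized, placed in .bss */
int initialized = 42;   /* static: initialized, stored in .data     */

int foo(void)
{
    int var = 10;       /* automatic: lives on the stack while foo() runs */
    int i;

    for (i = 0; i < var; i++)
        initialized += i;

    return initialized;
}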

Once this code is running, each of these symbols will need memory for its own storage. However, the zero initialized variable will not use space in the compiled binary. This is due to a special section inside the binary (called .bss, historically short for 'Block Started by Symbol') where all the zero initialized variables are placed. Since they carry no information, they need no space in the file. Static variables have the same lifetime as the executing program.

On the other hand, the var and i variables are dynamically allocated, since they live on the stack. They are called automatic variables, meaning that they have a life cycle that's not under our control.

Note that when we talk about static memory, the word static has nothing to do with the C-language keyword. That keyword controls a symbol's visibility, where static means file-local, as opposed to global.

The size command

The simplest command to get a binary's static size is the wonderfully named size command.
Let's start by seeing it in action on fs/ext2/ext2.o, the object we'll inspect throughout this section (the numbers shown are illustrative and will vary with kernel version and configuration):
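
$ size fs/ext2/ext2.o
   text    data     bss     dec     hex filename
  50462      68     408   50938    c6fa fs/ext2/ext2.o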

According to this output, this object file has roughly 50k bytes of text and 68 bytes of data.
Now, size comes in two flavors: berkeley and sysv. Each of these produces a different output.
The default is berkeley, so the previous example shows berkeley output.

However, if we run the same command with the sysv output format, we'll find quite different results (again illustrative, on the same object):
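
$ size --format=sysv fs/ext2/ext2.o
fs/ext2/ext2.o  :
section         size   addr
.text          48132      0
.rodata         2330      0
.data             68      0
.bss             408      0
Total          50938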

Here we see a more detailed description of each section's size. Note the appearance of a .rodata (read-only data) section, about 2k bytes large. This section is composed of read-only variables (e.g. those marked const), which are not reported separately by the standard size format.

We can conclude that the standard size format gives an incomplete picture of the compiled object.

To add even more confusion to this picture, gcc can decide (of its own accord) to put symbols not marked as const inside the .rodata section. These symbols are never written to, so gcc considers them read-only (pretty smart, huh?). This means you can have a .rodata section bigger than what you expected to have.

This appears to happen since gcc 4.7, though we haven't confirmed the exact version.

readelf

These two commands can give us any information we need about a binary. In particular, they can output the complete list of symbols with detailed information about each one. Let's see an example of readelf on the same file we used for size. The output is trimmed for clarity (the entry numbers and values are illustrative; the Size column is in bytes):
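
$ readelf -s fs/ext2/ext2.o

Symbol table '.symtab' contains 451 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
   ...
    87: 00000000    76 OBJECT  GLOBAL DEFAULT   14 ext2_nobh_aops
   112: 000012f0   286 FUNC    GLOBAL DEFAULT    1 ext2_evict_inode
   134: 00000000     0 NOTYPE  GLOBAL DEFAULT  UND unlock_new_inode
   ...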

For instance, ext2_nobh_aops is an OBJECT symbol (data) of 76 bytes and ext2_evict_inode is a FUNC symbol (text) of 286 bytes.
Notice there are some UND symbols. These are symbols undefined in this file and defined elsewhere, and therefore not of interest to us when inspecting a file's size.

Of course, this output can be combined with grep to get fantastic results.
Let's count the number of defined functions:

$ readelf -s fs/ext2/ext2.o | grep -v UND | grep FUNC | wc -l

With a little awk magic we can even sum these sizes and get an approximation of the size of the code in the file (the Size field is the third column of readelf's output):
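
$ readelf -s fs/ext2/ext2.o | grep -v UND | grep FUNC | awk '{ sum += $3 } END { print sum }'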

objdump

objdump can extract much the same information. For instance, objdump -h prints the section headers with their sizes, and objdump -t dumps the symbol table, similar to readelf -s:
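
$ objdump -h fs/ext2/ext2.o
$ objdump -t fs/ext2/ext2.o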

Dynamic

Dynamic memory in kernel land is a little different from user land.

In user land, all one needs to do to get a chunk of memory is call malloc().
In kernel land, we have a similar function: kmalloc(). But we also have lots
of other functions to allocate memory, and we must take some special considerations into account.

The first thing that's important to understand is that the kernel obtains memory
(well, on most architectures) in fixed-size chunks, each called a 'page' of memory.
A page is typically 4096 bytes large, but this depends on the architecture.

In order to deliver smaller pieces of memory, the kernel has a couple of layers
that ultimately let you do kmalloc(100) and get 100 bytes. These layers are called
the buddy allocator and the slab allocator.

We will focus on the latter. The slab allocator comes in three different flavors:
SLAB, SLOB and SLUB. These funny names are historical, but roughly:

SLAB is the traditional allocator

SLOB is aimed at tiny embedded systems (e.g. without an MMU)

SLUB is the default allocator

Each of these implements allocation in a different way, but they all share a common
property: internal fragmentation.

Internal fragmentation

For different reasons (alignment, overhead, etc.), when we request 100 bytes with kmalloc(100)
the slab allocator may really allocate 128 bytes (or 140 bytes; we can't really know beforehand).
These extra 28 bytes can't be used, and you are therefore wasting them.
This is called internal fragmentation, and one of the main goals of the slab allocator is
to minimize it. In other words, it tries to match the truly allocated size as closely as possible
to the requested size.
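
We can observe this slack directly from kernel code. A minimal sketch (ksize() is the kernel helper that returns the size really allocated for a kmalloc'ed object; show_slack is a hypothetical function name):

#include <linux/kernel.h>
#include <linux/slab.h>

static void show_slack(void)
{
    void *p = kmalloc(100, GFP_KERNEL);     /* request 100 bytes */

    if (p) {
        /* ksize() reports what the slab layer really handed out,
         * e.g. 128 bytes on a typical SLUB configuration */
        pr_info("requested 100, really allocated %zu\n", ksize(p));
        kfree(p);
    }
}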

Accounting with kmem events trace

Ftrace kmem events are a great source of information. By using them you can trace each kmalloc,
getting the requested bytes, the allocated bytes, the caller address and the returned pointer.
You can also trace kfree, getting the caller address and the freed pointer.

Once you have the caller address you can use the System.map file to get the caller's function name.
Also, by using the returned pointer and correlating it with kfree traces, you can keep track
of the dynamic memory currently used by each kernel function / subsystem.
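
Since System.map is sorted by address, the caller is the last symbol at or below the call_site address. A quick sketch using gawk (the address is illustrative):

$ gawk -v addr=0xffffffff810f3a6e \
    'strtonum("0x" $1) <= strtonum(addr) { sym = $3 } END { print sym }' System.map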

Let's see this in detail.

Enabling and reading kmem trace

We can activate this on boot up with the kernel parameter trace_event. For instance, to enable the whole set of kmem allocation events:
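
trace_event=kmem:kmalloc,kmem:kmalloc_node,kmem:kfree,kmem:kmem_cache_alloc,kmem:kmem_cache_alloc_node,kmem:kmem_cache_free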