Manual Notes

For the latest version of the Yocto Project Profiling
and Tracing Manual associated with this Yocto Project
release (version 2.3.1),
see the Yocto Project Profiling and Tracing Manual
on the
Yocto Project documentation page.

This version of the manual is version
2.3.1.
For later releases of the Yocto Project (if they exist),
go to the
Yocto Project documentation page,
use the "Active Releases" drop-down button,
and choose the Yocto Project version for which you want
the manual.

Yocto bundles a number of tracing and profiling tools - this 'HOWTO'
describes their basic usage and shows by example how to make use
of them to examine application and system behavior.

The tools presented are for the most part completely open-ended and
have quite good and/or extensive documentation of their own which
can be used to solve just about any problem you might come across
in Linux.
Each section that describes a particular tool has links to that
tool's documentation and website.

The purpose of this 'HOWTO' is to present a set of common and
generally useful tracing and profiling idioms along with their
application (as appropriate) to each tool, in the context of a
general-purpose 'drill-down' methodology that can be applied
to solving a large number (90%?) of problems.
For help with more advanced usages and problems, please see
the documentation and/or websites listed for each tool.

The final section of this 'HOWTO' is a collection of real-world
examples which we'll be continually adding to as we solve more
problems using the tools - feel free to add your own examples
to the list!

Most of the tools are available only in 'sdk' images or in images
built after adding 'tools-profile' to your local.conf.
So, in order to be able to access all of the tools described here,
please first build and boot an 'sdk' image e.g.

$ bitbake core-image-sato-sdk

or, alternatively, add 'tools-profile' to the
EXTRA_IMAGE_FEATURES line in your local.conf:

EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile"

If you use the 'tools-profile' method, you don't need to build an
sdk image - the tracing and profiling tools will be included in
non-sdk images as well e.g.:

$ bitbake core-image-sato

Note

By default, the Yocto build system strips symbols from the
binaries it packages, which makes it difficult to use some
of the tools.

You can prevent that by setting the
INHIBIT_PACKAGE_STRIP
variable to "1" in your
local.conf when you build the image:

INHIBIT_PACKAGE_STRIP = "1"

The above setting will noticeably increase the size of your image.

If you've already built a stripped image, you can generate
debug packages (xxx-dbg) which you can manually install as
needed.

To generate debug info for packages, you can add dbg-pkgs to
EXTRA_IMAGE_FEATURES in local.conf. For example:

EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"

Additionally, in order to generate the right type of
debuginfo, we also need to add the following to local.conf:

PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'

Chapter 2. Overall Architecture of the Linux Tracing and Profiling Tools

It may seem surprising to see a section covering an 'overall architecture'
for what seems to be a random collection of tracing tools that together
make up the Linux tracing and profiling space.
The fact is, however, that in recent years this seemingly disparate
set of tools has started to converge on a 'core' set of underlying
mechanisms:

static tracepoints

dynamic tracepoints

kprobes

uprobes

the perf_events subsystem

debugfs

Tying it Together: Rather than enumerating here how each tool makes use of
these common mechanisms, textboxes like this will make note of the
specific usages in each tool as they come up in the course
of the text.

Chapter 3. Basic Usage (with examples) for each of the Yocto Tracing Tools

The 'perf' tool is the profiling and tracing tool that comes
bundled with the Linux kernel.

Don't let the fact that it's part of the kernel fool you into thinking
that it's only for tracing and profiling the kernel - you can indeed
use it to trace and profile just the kernel, but you can also use it
to profile specific applications separately (with or without kernel
context), and you can also use it to trace and profile the kernel
and all applications on the system simultaneously to gain a system-wide
view of what's going on.

In many ways, perf aims to be a superset of all the tracing and profiling
tools available in Linux today, including all the other tools covered
in this HOWTO. The past couple of years have seen perf subsume a lot
of the functionality of those other tools and, at the same time, those
other tools have removed large portions of their previous functionality
and replaced it with calls to the equivalent functionality now
implemented by the perf subsystem. Extrapolation suggests that at
some point those other tools will simply become completely redundant
and go away; until then, we'll cover those other tools in these pages
and in many cases show how the same things can be accomplished in
perf and the other tools when it seems useful to do so.

The coverage below details some of the most common ways you'll likely
want to apply the tool; full documentation can be found either within
the tool itself or in the man pages at
perf(1).

perf runs on the target system for the most part. You can archive
profile data and copy it to the host for analysis, but for the
rest of this document we assume you've ssh'ed into the target
from the host and will be running the perf commands there.

As a simple test case, we'll profile the 'wget' of a fairly large
file, which is a minimally interesting case because it has both
file and network I/O aspects, and at least in the case of standard
Yocto images, it's implemented as part of busybox, so the methods
we use to analyze it can be used in a very similar way to the whole
host of supported busybox applets in Yocto.

The quickest and easiest way to get some basic overall data about
what's going on for a particular workload is to profile it using
'perf stat'. 'perf stat' basically profiles using a few default
counters and displays the summed counts at the end of the run:
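
[ a sketch - the URL is illustrative; substitute any reasonably large file ]
root@crownbay:~# perf stat wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2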

Many times such a simple-minded test doesn't yield much of
interest, but sometimes it does (see Real-world Yocto bug
(slow loop-mounted write speed)).

Also, note that 'perf stat' isn't restricted to a fixed set of
counters - basically any event listed in the output of 'perf list'
can be tallied by 'perf stat'. For example, suppose we wanted to
see a summary of all the events related to kernel memory
allocation/freeing along with cache hits and misses:
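
[ a sketch - a wildcard grabs every event in the 'kmem' subsystem,
and the two cache counters are named as they appear in 'perf list' ]
root@crownbay:~# perf stat -e kmem:* -e cache-references -e cache-misses wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2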

So 'perf stat' gives us a nice easy way to get a quick overview of
what might be happening for a set of events, but normally we'd
need a little more detail in order to understand what's going on
in a way that we can act on.

To dive down into the next level of detail, we can use 'perf
record'/'perf report', which will collect profiling data and
present it to us using an interactive text-based UI (or
simply as text if we specify --stdio to 'perf report').

As our first attempt at profiling this workload, we'll simply
run 'perf record', handing it the workload we want to profile
(everything after 'perf record' and any perf options we hand
it - here none - will be executed in a new shell). perf collects
samples until the process exits and records them in a file named
'perf.data' in the current working directory.
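
[ a sketch - same illustrative URL as above ]
root@crownbay:~# perf record wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2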

To see the results in a 'text-based UI' (tui), simply run
'perf report', which will read the perf.data file in the current
working directory and display the results in an interactive UI:

root@crownbay:~# perf report

The above screenshot displays a 'flat' profile, one entry for
each 'bucket' corresponding to the functions that were profiled
during the profiling run, ordered from the most popular to the
least (perf has options to sort in various orders and keys as
well as display entries only above a certain threshold and so
on - see the perf documentation for details). Note that this
includes both userspace functions (entries containing a [.]) and
kernel functions accounted to the process (entries containing
a [k]). (perf has command-line modifiers that can be used to
restrict the profiling to kernel or userspace, among others).

Notice also that the above report shows an entry for 'busybox',
which is the executable that implements 'wget' in Yocto, but that
instead of a useful function name in that entry, it displays
a not-so-friendly hex value instead. The steps below will show
how to fix that problem.

Before we do that, however, let's try running a different profile,
one which shows something a little more interesting. The only
difference between the new profile and the previous one is that
we'll add the -g option, which will record not just the address
of a sampled function, but the entire callchain to the sampled
function as well:
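
[ a sketch - the same illustrative workload, now with callchain recording ]
root@crownbay:~# perf record -g wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2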

Using the callgraph view, we can actually see not only which
functions took the most time, but we can also see a summary of
how those functions were called and learn something about how the
program interacts with the kernel in the process.

Notice that each entry in the above screenshot now contains a '+'
on the left-hand side. This means that we can expand the entry and
drill down into the callchains that feed into that entry.
Pressing 'enter' on any one of them will expand the callchain
(you can also press 'E' to expand them all at the same time or 'C'
to collapse them all).

In the screenshot above, we've toggled the __copy_to_user_ll()
entry and several subnodes all the way down. This lets us see
which callchains contributed to the profiled __copy_to_user_ll()
function which contributed 1.77% to the total profile.

As a bit of background explanation for these callchains, think
about what happens at a high level when you run wget to fetch a
file over the network. Basically what happens is that the data
comes into the kernel via the network connection (socket) and is
passed to the userspace program 'wget' (which is actually a part of
busybox, but that's not important for now), which takes the buffers
the kernel passes to it and writes them to a disk file to save them.

The part of this process that we're looking at in the above call
stacks is the part where the kernel passes the data it's read from
the socket down to wget i.e. a copy-to-user.

Notice that there's also a case here where a hex value is
displayed in the callstack, in the expanded
sys_clock_gettime() function. Later we'll see it resolve to a
userspace function call in busybox.

The above screenshot shows the other half of the journey for the
data - from the wget program's userspace buffers to disk. To get
the buffers to disk, the wget program issues a write(2), which
does a copy-from-user to the kernel, which then takes care, via
some circuitous path (probably also present somewhere in the
profile data), of getting it safely to disk.

Now that we've seen the basic layout of the profile data and the
basics of how to extract useful information out of it, let's get
back to the task at hand and see if we can get some basic idea
about where the time is spent in the program we're profiling,
wget. Remember that wget is actually implemented as an applet
in busybox, so while the process name is 'wget', the executable
we're actually interested in is busybox. So let's expand the
first entry containing busybox:

Again, before we expanded we saw that the function was labeled
with a hex value instead of a symbol as with most of the kernel
entries. Expanding the busybox entry doesn't make it any better.

The problem is that perf can't find the symbol information for the
busybox binary, which is actually stripped out by the Yocto build
system.

One way around that is to put the following in your
local.conf file when you build the image:

INHIBIT_PACKAGE_STRIP = "1"

However, we already have an image with the binaries stripped,
so what can we do to get perf to resolve the symbols? Basically
we need to install the debuginfo for the busybox package.

To generate the debug info for the packages in the image, we can
add dbg-pkgs to EXTRA_IMAGE_FEATURES in local.conf. For example:

EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"

Additionally, in order to generate the type of debuginfo that
perf understands, we also need to add the following to local.conf:

PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'

Once we've done that, we can install the debuginfo for busybox.
The debug packages once built can be found in
build/tmp/deploy/rpm/* on the host system. Find the
busybox-dbg-...rpm file and copy it to the target. For example:
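
[ the package version, architecture directory, and target address
below are illustrative - use whatever your build actually produced ]
$ scp tmp/deploy/rpm/i586/busybox-dbg-1.20.2-r2.i586.rpm root@192.168.1.xxx:
root@crownbay:~# rpm -i busybox-dbg-1.20.2-r2.i586.rpm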

Now that the debuginfo is installed, we see that the busybox
entries now display their functions symbolically:

If we expand one of the entries and press 'enter' on a leaf node,
we're presented with a menu of actions we can take to get more
information related to that entry:

One of these actions displays a busybox-centric view of the
profiled functions (in this case we've also expanded all the
nodes using the 'E' key):

Finally, now that the busybox debuginfo is installed, we can
see that the previously unresolved symbol in the
sys_clock_gettime() entry is resolved,
and shows that the sys_clock_gettime system call that was the
source of 6.75% of the copy-to-user overhead was initiated by
the handle_input() busybox function:

At the lowest level of detail, we can dive down to the assembly
level and see which instructions caused the most overhead in a
function. Pressing 'enter' on the 'udhcpc_main' function, we're
again presented with a menu:

Selecting 'Annotate udhcpc_main', we get a detailed listing of
percentages by instruction for the udhcpc_main function. From the
display, we can see that over 50% of the time spent in this
function is taken up by a couple of tests and the move of a
constant (1) to a register:

As a segue into tracing, let's try another profile using a
different counter, something other than the default 'cycles'.

The tracing and profiling infrastructure in Linux has become
unified in a way that allows us to use the same tool with a
completely different set of counters, not just the standard
hardware counters that traditional tools have had to restrict
themselves to (of course the traditional tools can also make use
of the expanded possibilities now available to them, and in some
cases have, as mentioned previously).

We can get a list of the available events that can be used to
profile a workload via 'perf list':
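
root@crownbay:~# perf list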

Tying it Together: These are exactly the same set of events defined
by the trace event subsystem and exposed by
ftrace/tracecmd/kernelshark as files in
/sys/kernel/debug/tracing/events, by SystemTap as
kernel.trace("tracepoint_name") and (partially) accessed by LTTng.

Only a subset of these would be of interest to us when looking at
this workload, so let's choose the most likely subsystems
(identified by the string before the colon in the Tracepoint events)
and do a 'perf stat' run using only those wildcarded subsystems:
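
[ a sketch - these subsystems are plausible picks for a
network-and-disk workload like wget; adjust to your own workload ]
root@crownbay:~# perf stat -e skb:* -e net:* -e napi:* -e sched:* -e workqueue:* -e irq:* -e syscalls:* wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2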

The screenshot above shows the results of running a profile using
the sched:sched_wakeup tracepoint (recorded with something like
'perf record -g -e sched:sched_wakeup' against the same workload),
which shows the relative costs of various paths to sched_wakeup
(note that sched_wakeup is the name of the tracepoint - it's
actually defined just inside ttwu_do_wakeup(), which accounts for
the function name actually displayed in the profile):

A couple of the more interesting callchains are expanded and
displayed above, basically some network receive paths that
presumably end up waking up wget (busybox) when network data is
ready.

Note that because tracepoints are normally used for tracing,
the default sampling period for tracepoints is 1, i.e. for
tracepoints perf will sample on every event occurrence (this
can be changed using the -c option). This is in contrast to
hardware counters such as the default 'cycles'
hardware counter used for normal profiling, where sampling
periods are much higher (in the thousands) because profiling should
have as low an overhead as possible and sampling on every cycle
would be prohibitively expensive.

Profiling is a great tool for solving many problems or for
getting a high-level view of what's going on with a workload or
across the system. It is however by definition an approximation,
as suggested by the most prominent word associated with it,
'sampling'. On the one hand, it allows a representative picture of
what's going on in the system to be cheaply taken, but on the other
hand, that cheapness limits its utility when that data suggests a
need to 'dive down' more deeply to discover what's really going
on. In such cases, the only way to see what's really going on is
to be able to look at (or summarize more intelligently) the
individual steps that go into the higher-level behavior exposed
by the coarse-grained profiling data.

As a concrete example, we can trace all the events we think might
be applicable to our workload:
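
[ a sketch - a broad set of plausibly relevant events; running
'perf script' without arguments afterwards dumps the trace as text ]
root@crownbay:~# perf record -g -e skb:* -e net:* -e napi:* -e sched:sched_switch -e sched:sched_wakeup -e irq:* -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
root@crownbay:~# perf script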

This gives us a detailed timestamped sequence of events that
occurred within the workload with respect to those events.

In many ways, profiling can be viewed as a subset of tracing -
theoretically, if you have a set of trace events that's sufficient
to capture all the important aspects of a workload, you can derive
any of the results or views that a profiling run can produce.

Another aspect of traditional profiling is that while powerful in
many ways, it's limited by the granularity of the underlying data.
Profiling tools offer various ways of sorting and presenting the
sample data, which make it much more useful and amenable to user
experimentation, but in the end it can't be used in an open-ended
way to extract data that just isn't present as a consequence of
the fact that conceptually, most of it has been thrown away.

Full-blown detailed tracing data does however offer the opportunity
to manipulate and present the information collected during a
tracing run in an infinite variety of ways.

Another way to look at it is that there are only so many ways that
the 'primitive' counters can be used on their own to generate
interesting output; to get anything more complicated than simple
counts requires some amount of additional logic, which is typically
very specific to the problem at hand. For example, if we wanted to
make use of a 'counter' that maps to the value of the time
difference between when a process was scheduled to run on a
processor and the time it actually ran, we wouldn't expect such
a counter to exist on its own, but we could derive one called say
'wakeup_latency' and use it to extract a useful view of that metric
from trace data. Likewise, we really can't figure out from standard
profiling tools how much data every process on the system reads and
writes, along with how many of those reads and writes fail
completely. If we have sufficient trace data, however, we could
with the right tools easily extract and present that information,
but we'd need something other than pre-canned profiling tools to
do that.

Luckily, there is a general-purpose way to handle such needs,
called 'programming languages'. Making programming languages
easily available to apply to such problems given the specific
format of data is called a 'programming language binding' for
that data and language. Perf supports two programming language
bindings, one for Python and one for Perl.

Tying it Together: Language bindings for manipulating and
aggregating trace data are of course not a new
idea. One of the first projects to do this was IBM's DProbes
dpcc compiler, an ANSI C compiler which targeted a low-level
assembly language running on an in-kernel interpreter on the
target system. This is exactly analogous to what Sun's DTrace
did, except that DTrace invented its own language for the purpose.
Systemtap, heavily inspired by DTrace, also created its own
one-off language, but rather than running the product on an
in-kernel interpreter, created an elaborate compiler-based
machinery to translate its language into kernel modules written
in C.

Now that we have the trace data in perf.data, we can use
'perf script -g' to generate a skeleton script with handlers
for the read/write entry/exit events we recorded:
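
[ for the Python binding; perf names the generated file perf-script.py ]
root@crownbay:~# perf script -g python
generated Python script: perf-script.py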

That in itself isn't very useful; after all, we can accomplish
pretty much the same thing by simply running 'perf script'
without arguments in the same directory as the perf.data file.

We can however replace the print statements in the generated
function bodies with whatever we want, and thereby make it
infinitely more useful.

As a simple example, let's just replace the print statements in
the function bodies with a simple function that does nothing but
increment a per-event count. When the program is run against a
perf.data file, each time a particular event is encountered,
a tally is incremented for that event. For example:
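
A sketch of one such modified handler in perf-script.py (the
parameter list is whatever 'perf script -g python' generated for
the event - it follows the event's format file and so varies by
kernel version):

def syscalls__sys_enter_read(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        nr, fd, buf, count):
                inc_counts(event_name)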

Each event handler function in the generated code is modified
to do this. For convenience, we define a common function called
inc_counts() that each handler calls; inc_counts() simply tallies
a count for each event using the 'counts' hash, which is a
specialized hash that provides Perl-like autovivification, a
capability that's extremely useful for the kinds of multi-level
aggregation commonly used in processing traces (see perf's
documentation on the Python language binding for details):
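
A sketch of that common code (autodict comes from the Core module
already imported by the generated skeleton, and the generated
trace_end() hook is a natural place to print the tally; the print
syntax is Python 2, which is what perf's binding of this era uses):

counts = autodict()

def inc_counts(event_name):
        # autovivification: the first += on a missing key raises
        # TypeError, at which point we initialize the count
        try:
                counts[event_name] += 1
        except TypeError:
                counts[event_name] = 1

def trace_end():
        # called by perf when the script finishes processing
        for event_name in counts.keys():
                print "%-40s %10d" % (event_name, counts[event_name])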

Note that this is pretty much exactly the same information we get
from 'perf stat', which goes a little way to support the idea
mentioned previously that given the right kind of trace data,
higher-level profiling-type summaries can be derived from it.

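Everything so far has handed perf a specific workload to profile;
to see every process on the system, we can instead record
system-wide using the -a switch while the workload runs normally.
A sketch (bounded here by 'sleep 30' so the recording stops on
its own):

root@crownbay:~# perf record -g -a sleep 30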
Here we see entries not only for our wget load, but for other
processes running on the system as well:

In the snapshot above, we can see callchains that originate in
libc, and a callchain from Xorg that demonstrates that we're
using a proprietary X driver in userspace (notice the presence
of 'PVR' and some other unresolvable symbols in the expanded
Xorg callchain).

Note also that we have both kernel and userspace entries in the
above snapshot. We can also tell perf to focus on userspace by
providing a modifier, in this case 'u', to the 'cycles' hardware
counter when we record a profile:
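
[ a sketch - again recording system-wide for a bounded interval ]
root@crownbay:~# perf record -g -a -e cycles:u sleep 30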

Finally, we can press 'enter' on a leaf node and select the 'Zoom
into DSO' menu item to show only entries associated with a
specific DSO. In the screenshot below, we've zoomed into the
'libc' DSO which shows all the entries associated with the
libc-xxx.so DSO.

We can also use the system-wide -a switch to do system-wide
tracing. Here we'll trace a couple of scheduler events:
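
[ a sketch - bounded by 'sleep 30' so the trace ends on its own ]
root@crownbay:~# perf record -a -e sched:sched_switch -e sched:sched_wakeup sleep 30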

Notice that there are a lot of events that don't really have
anything to do with what we're interested in, namely events
that schedule 'perf' itself in and out or that wake perf up.
We can get rid of those by using the '--filter' option -
for each event we specify using -e, we can add a --filter
after that to filter out trace events that contain fields
with specific values:
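
[ a sketch - sched_switch carries prev_comm/next_comm fields while
sched_wakeup carries comm, so each event gets its own filter ]
root@crownbay:~# perf record -a -e sched:sched_switch --filter 'next_comm != perf && prev_comm != perf' -e sched:sched_wakeup --filter 'comm != perf' sleep 30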

In this case, we've filtered out all events that have 'perf'
in their 'comm', 'prev_comm', or 'next_comm' fields. Notice
that there are still events recorded for perf, but notice
that those events don't have values of 'perf' for the filtered
fields. To completely filter out anything from perf will
require a bit more work, but for the purpose of demonstrating
how to use filters, it's close enough.

Tying it Together: These are exactly the same set of event
filters defined by the trace event subsystem. See the
ftrace/tracecmd/kernelshark section for more discussion about
these event filters.

Tying it Together: These event filters are implemented by a
special-purpose pseudo-interpreter in the kernel and are an
integral and indispensable part of the perf design as it
relates to tracing. kernel-based event filters provide a
mechanism to precisely throttle the event stream that appears
in user space, where it makes sense to provide bindings to real
programming languages for postprocessing the event stream.
This architecture allows for the intelligent and flexible
partitioning of processing between the kernel and user space.
Contrast this with other tools such as SystemTap, which does
all of its processing in the kernel and as such requires a
special project-defined language in order to accommodate that
design, or LTTng, where everything is sent to userspace and
as such requires a super-efficient kernel-to-userspace
transport mechanism in order to function properly. While
perf certainly can benefit from for instance advances in
the design of the transport, it doesn't fundamentally depend
on them. Basically, if you find that your perf tracing
application is causing buffer I/O overruns, it probably
means that you aren't taking enough advantage of the
kernel filtering engine.

perf isn't restricted to the fixed set of static tracepoints
listed by 'perf list'. Users can also add their own 'dynamic'
tracepoints anywhere in the kernel. For instance, suppose we
want to define our own tracepoint on do_fork(). We can do that
using the 'perf probe' perf subcommand:

root@crownbay:~# perf probe do_fork
Added new event:
  probe:do_fork        (on do_fork)

You can now use it in all perf tools, such as:

        perf record -e probe:do_fork -aR sleep 1

Adding a new tracepoint via 'perf probe' results in an event
with all the expected files and format in
/sys/kernel/debug/tracing/events, just the same as for static
tracepoints (as discussed in more detail in the trace events
subsystem section):
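
[ the new probe event appears under the 'probe' subsystem with
the usual control files ]
root@crownbay:~# ls /sys/kernel/debug/tracing/events/probe/do_fork
enable  filter  format  id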

After recording with the new tracepoint for a while (something
like 'perf record -g -e probe:do_fork -a sleep 30'), we can use
'perf report' on the resulting file to see the callgraphs from
starting a few programs during those 30 seconds:

Tying it Together: The trace events subsystem accommodates static
and dynamic tracepoints in exactly the same way - there's no
difference as far as the infrastructure is concerned. See the
ftrace section for more details on the trace event subsystem.

Tying it Together: Dynamic tracepoints are implemented under the
covers by kprobes and uprobes. kprobes and uprobes are also used
by and in fact are the main focus of SystemTap.

For this section, we'll assume you've already performed the basic
setup outlined in the General Setup section.

ftrace, trace-cmd, and kernelshark run on the target system,
and are ready to go out-of-the-box - no additional setup is
necessary. For the rest of this section we assume you've ssh'ed
into the target from the host and will be running ftrace there.
kernelshark is a GUI application; if you use the '-X' option
to ssh, you can have the kernelshark GUI run on the target but
display remotely on the host if you want.

'ftrace' essentially refers to everything included in
the /tracing directory of the mounted debugfs filesystem
(Yocto follows the standard convention and mounts it
at /sys/kernel/debug). Here's a listing of all the files
found in /sys/kernel/debug/tracing on a Yocto system:
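
[ an abbreviated sketch - the exact contents vary with kernel
version and configuration ]
root@sugarbay:~# ls /sys/kernel/debug/tracing
README             current_tracer  set_ftrace_filter  trace_options
available_events   events          set_event          trace_pipe
available_tracers  options         trace              tracing_on
buffer_size_kb     ...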

The files listed above are used for various purposes -
some relate directly to the tracers themselves, others are
used to set tracing options, and yet others actually contain
the tracing output when a tracer is in effect. Some of the
functions can be guessed from their names, others need
explanation; in any case, we'll cover some of the files we
see here below but for an explanation of the others, please
see the ftrace documentation.

We'll start by looking at some of the available built-in
tracers.

cat'ing the 'available_tracers' file lists the set of
available tracers:
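
[ output from a typical Yocto kernel - the available set depends
on kernel configuration - followed by the write that selects the
function tracer ]
root@sugarbay:~# cat /sys/kernel/debug/tracing/available_tracers
blk function_graph function nop
root@sugarbay:~# echo function > /sys/kernel/debug/tracing/current_tracer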

The above sets the current tracer to be the
'function tracer'. This tracer traces every function
call in the kernel and makes it available as the
contents of the 'trace' file. Reading the 'trace' file
lists the currently buffered function calls that have been
traced by the function tracer:
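
root@sugarbay:~# cat /sys/kernel/debug/tracing/trace | less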

Each line in the trace above shows what was happening in
the kernel on a given cpu, to the level of detail of
function calls. Each entry shows the function called,
followed by its caller (after the arrow).

The function tracer gives you an extremely detailed idea
of what the kernel was doing at the point in time the trace
was taken, and is a great way to learn about how the kernel
code works in a dynamic sense.

Tying it Together: The ftrace function tracer is also
available from within perf, as the ftrace:function tracepoint.

It is a little more difficult to follow the call chains than
it needs to be - luckily there's a variant of the function
tracer that displays the callchains explicitly, called the
'function_graph' tracer:
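
[ selected with the same one-line write as before ]
root@sugarbay:~# echo function_graph > /sys/kernel/debug/tracing/current_tracer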

As you can see, the function_graph display is much easier to
follow. Also note that in addition to the function calls and
associated braces, other events such as scheduler events
are displayed in context. In fact, you can freely include
any tracepoint available in the trace events subsystem described
in the next section by simply enabling those events, and they'll
appear in context in the function graph display. Quite a
powerful tool for understanding kernel dynamics.

Also notice that there are various annotations on the left
hand side of the display. For example if the total time it
took for a given function to execute is above a certain
threshold, an exclamation point or plus sign appears on the
left hand side. Please see the ftrace documentation for
details on all these fields.

One especially important directory contained within
the /sys/kernel/debug/tracing directory is the 'events'
subdirectory, which contains representations of every
tracepoint in the system. Listing out the contents of
the 'events' subdirectory, we see mainly another set of
subdirectories:
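
[ an abbreviated sketch - the subsystems present depend on kernel
configuration ]
root@sugarbay:~# ls /sys/kernel/debug/tracing/events
block         header_event  kmem    sched    syscalls
enable        header_page   module  signal   timer
ftrace        irq           net     skb      workqueue
...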

Each one of these subdirectories corresponds to a
'subsystem' and contains yet again more subdirectories,
each one of those finally corresponding to a tracepoint.
For example, here are the contents of the 'kmem' subsystem:
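
[ an abbreviated sketch ]
root@sugarbay:~# ls /sys/kernel/debug/tracing/events/kmem
enable   kmalloc_node           kmem_cache_free
filter   kmem_cache_alloc       mm_page_alloc
kfree    kmem_cache_alloc_node  mm_page_free
kmalloc  ...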

The 'format' file for the tracepoint describes the event
in memory. It is used by the various tracing tools
that now make use of these tracepoints to parse the event
and make sense of it, and includes a 'print fmt' field that
allows tools like ftrace to display the event as text.
Here's what the format of the kmalloc event looks like:
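
[ an abbreviated sketch from a 64-bit kernel - the ID, offsets,
and sizes vary between kernels and architectures ]
root@sugarbay:~# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
name: kmalloc
ID: 313
format:
        field:unsigned short common_type;       offset:0;   size:2; signed:0;
        field:unsigned char common_flags;       offset:2;   size:1; signed:0;
        field:unsigned char common_preempt_count;  offset:3;   size:1; signed:0;
        field:int common_pid;   offset:4;   size:4; signed:1;

        field:unsigned long call_site;  offset:8;   size:8; signed:0;
        field:const void * ptr; offset:16;  size:8; signed:0;
        field:size_t bytes_req; offset:24;  size:8; signed:0;
        field:size_t bytes_alloc;       offset:32;  size:8; signed:0;
        field:gfp_t gfp_flags;  offset:40;  size:4; signed:0;

print fmt: "call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s", ...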

The 'enable' file in the tracepoint directory is what allows
the user (or tools such as trace-cmd) to actually turn the
tracepoint on and off. When enabled, the corresponding
tracepoint will start appearing in the ftrace 'trace'
file described previously. For example, this turns on the
kmalloc tracepoint:
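
root@sugarbay:~# echo 1 > /sys/kernel/debug/tracing/events/kmem/kmalloc/enable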

At the moment, we're not interested in the function tracer or
some other tracer that might be in effect, so we first turn
it off, but if we do that, we still need to turn tracing on in
order to see the events in the output buffer:
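
[ 'nop' is the do-nothing tracer; writing 1 to tracing_on enables
writes to the trace buffer ]
root@sugarbay:~# echo nop > /sys/kernel/debug/tracing/current_tracer
root@sugarbay:~# echo 1 > /sys/kernel/debug/tracing/tracing_on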

You can enable any number of events or complete subsystems
(by using the 'enable' file in the subsystem directory) and
get an arbitrarily fine-grained idea of what's going on in the
system by enabling as many of the appropriate tracepoints
as applicable.

A number of the tools described in this HOWTO do just that,
including trace-cmd and kernelshark in the next section.

Tying it Together: These tracepoints and their representation
are used not only by ftrace, but by many of the other tools
covered in this document and they form a central point of
integration for the various tracers available in Linux.
They form a central part of the instrumentation for the
following tools: perf, lttng, ftrace, blktrace, and SystemTap.

Tying it Together: Eventually all the special-purpose tracers
currently available in /sys/kernel/debug/tracing will be
removed and replaced with equivalent tracers based on the
'trace events' subsystem.

trace-cmd is essentially an extensive command-line 'wrapper'
interface that hides the details of all the individual files
in /sys/kernel/debug/tracing, allowing users to specify
particular events within the
/sys/kernel/debug/tracing/events/ subdirectory, collect
traces, and avoid having to deal with those details directly.

As yet another layer on top of that, kernelshark provides a GUI
that allows users to start and stop traces and specify sets
of events using an intuitive interface, and view the
output as both trace events and as a per-CPU graphical
display. It directly uses 'trace-cmd' as the plumbing
that accomplishes all that underneath the covers (and
actually displays the trace-cmd command it uses, as we'll see).

To start a trace using kernelshark, first start kernelshark:

root@sugarbay:~# kernelshark

Then bring up the 'Capture' dialog by choosing from the
kernelshark menu:

Capture | Record

That will display the following dialog, which allows you to
choose one or more events (or even one or more complete
subsystems) to trace:

Note that these are exactly the same sets of events described
in the previous trace events subsystem section, and in fact
this is where trace-cmd gets them for kernelshark.

In the above screenshot, we've decided to explore the
graphics subsystem a bit and so have chosen to trace all
the tracepoints contained within the 'i915' and 'drm'
subsystems.

After doing that, we can start and stop the trace using
the 'Run' button on the lower right corner of
the dialog (the same button turns into the 'Stop'
button after the trace has started):

Notice that the right-hand pane shows the exact trace-cmd
command-line that's used to run the trace, along with the
results of the trace-cmd run.

Once the 'Stop' button is pressed, the graphical view magically
fills up with a colorful per-cpu display of the trace data,
along with the detailed event listing below that:

Here's another example, this time a display resulting
from tracing 'all events':

The tool is pretty self-explanatory, but for more detailed
information on navigating through the data, see the
kernelshark website.

SystemTap scripts are C-like programs that are executed in the
kernel to gather/print/aggregate data extracted from the context
they end up being invoked under.

For example, this probe from the
SystemTap tutorial
simply prints a line every time any process on the system open()s
a file. For each line, it prints the executable name of the
program that opened the file, along with its PID, and the name
of the file it opened (or tried to open), which it extracts
from the open syscall's argstr.
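
The probe itself (a reconstruction of the tutorial example; the
file is assumed to be saved as trace_open.stp):

probe syscall.open
{
        printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
}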

Normally, to execute this probe, you'd simply install
systemtap on the system you want to probe, and directly run
the probe on that system e.g. assuming the name of the file
containing the above text is trace_open.stp:

# stap trace_open.stp

What systemtap does under the covers to run this probe is 1)
parse and convert the probe to an equivalent 'C' form, 2)
compile the 'C' form into a kernel module, 3) insert the
module into the kernel, which arms it, and 4) collect the data
generated by the probe and display it to the user.

In order to accomplish steps 1 and 2, the 'stap' program needs
access to the kernel build system that produced the kernel
that the probed system is running. In the case of a typical
embedded system (the 'target'), the kernel build system
unfortunately isn't typically part of the image running on
the target. It is normally available on the 'host' system
that produced the target image however; in such cases,
steps 1 and 2 are executed on the host system, and steps
3 and 4 are executed on the target system, using only the
systemtap 'runtime'.

The systemtap support in Yocto assumes that only steps
3 and 4 are run on the target; it is possible to do
everything on the target, but this section assumes only
the typical embedded use-case.

So basically what you need to do in order to run a systemtap
script on the target is to 1) on the host system, compile the
probe into a kernel module that makes sense to the target, 2)
copy the module onto the target system and 3) insert the
module into the target kernel, which arms it, and 4) collect
the data generated by the probe and display it to the user.

Those are a lot of steps and a lot of details, but
fortunately Yocto includes a script called 'crosstap'
that will take care of those details, allowing you to
simply execute a systemtap script on the remote target,
with arguments if necessary.

In order to do this from a remote host, however, you
need to have access to the build for the image you
booted. The 'crosstap' script provides details on how
to do this if you run the script on the host without having
done a build:

Note

'crosstap', which runs a SystemTap script on a remote target,
assumes you can establish an ssh connection to that target.
Please refer to the crosstap wiki page for details on verifying
ssh connections at
https://wiki.yoctoproject.org/wiki/Tracing_and_Profiling#systemtap.
Also, the ability to ssh into the target system is not enabled
by default in *-minimal images.

$ crosstap root@192.168.1.88 trace_open.stp
Error: No target kernel build found.
Did you forget to create a local build of your image?
'crosstap' requires a local sdk build of the target system
(or a build that includes 'tools-profile') in order to build
kernel modules that can probe the target system.
Practically speaking, that means you need to do the following:
- If you're running a pre-built image, download the release
and/or BSP tarballs used to build the image.
- If you're working from git sources, just clone the metadata
and BSP layers needed to build the image you'll be booting.
- Make sure you're properly set up to build a new image (see
the BSP README and/or the widely available basic documentation
that discusses how to build images).
- Build an -sdk version of the image e.g.:
$ bitbake core-image-sato-sdk
OR
- Build a non-sdk image but include the profiling tools:
[ edit local.conf and add 'tools-profile' to the end of
the EXTRA_IMAGE_FEATURES variable ]
$ bitbake core-image-sato
Once you've build the image on the host system, you're ready to
boot it (or the equivalent pre-built image) and use 'crosstap'
to probe it (you need to source the environment as usual first):
$ source oe-init-build-env
$ cd ~/my/systemtap/scripts
$ crosstap root@192.168.1.xxx myscript.stp

So essentially what you need to do is build an SDK image or
image with 'tools-profile' as detailed in the
"General Setup"
section of this manual, and boot the resulting target image.

Note

If you have a build directory containing multiple machines,
you need to have the MACHINE you're connecting to selected
in local.conf, and the kernel in that machine's build
directory must match the kernel on the booted system exactly,
or you'll get the above 'crosstap' message when you try to
invoke a script.

Once you've done that, you should be able to run a systemtap
script on the target:

$ cd /path/to/yocto
$ source oe-init-build-env
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'

Once you've done that, you can cd to whatever directory
contains your scripts and use 'crosstap' to run the script:
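
[ the script directory and target IP address are illustrative ]
$ cd /path/to/my/systemtap/scripts
$ crosstap root@192.168.7.2 trace_open.stp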

For this section, we'll assume you've already performed the
basic setup outlined in the General Setup section.

Sysprof is a GUI-based application that runs on the target
system. For the rest of this document we assume you've
ssh'ed into the target from the host and will be running
Sysprof there (you can use the '-X' option to ssh and have
the Sysprof GUI run on the target but display remotely on
the host if you want).

To start profiling the system, you simply press the 'Start'
button. To stop profiling and to start viewing the profile data
in one easy step, press the 'Profile' button.

Once you've pressed the profile button, the three panes will
fill up with profiling data:

The left pane shows a list of functions and processes.
Selecting one of those expands that function in the right
pane, showing all its callees. Note that this caller-oriented
display is essentially the inverse of perf's default
callee-oriented callchain display.

In the screenshot above, we're focusing on __copy_to_user_ll()
and looking up the callchain we can see that one of the callers
of __copy_to_user_ll is sys_read() and the complete callpath
between them. Notice that this is essentially a portion of the
same information we saw in the perf display shown in the perf
section of this page.

Similarly, the above is a snapshot of the Sysprof display of a
copy-from-user callchain.

Finally, looking at the third Sysprof pane in the lower left,
we can see a list of all the callers of a particular function
selected in the top left pane. In this case, the lower pane is
showing all the callers of __mark_inode_dirty:

Double-clicking on one of those functions will in turn change the
focus to the selected function, and so on.

Tying it Together: If you like sysprof's 'caller-oriented'
display, you may be able to approximate it in other tools as
well. For example, 'perf report' has the -g (--call-graph)
option that you can experiment with; one of the options is
'caller' for an inverted caller-based callgraph display.

Once you've applied the above commits and built and booted your
image (you need to build the core-image-sato-sdk image or use one of the
other methods described in the General Setup section), you're
ready to start tracing.

3.5.2.1. Collecting and viewing a trace on the target (inside a shell)

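A minimal sketch of such a session (here tracing all kernel
events; 'lttng create' prints the session name and trace
directory it chose):

root@crownbay:~# lttng create
root@crownbay:~# lttng enable-event -a -k
root@crownbay:~# lttng start
root@crownbay:~# lttng stop
root@crownbay:~# lttng view
root@crownbay:~# lttng destroy
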
Note that the trace is saved in a directory of the same
name as returned by 'lttng create', under the ~/lttng-traces
directory (note that you can change this by supplying your
own name to 'lttng create'):

3.5.2.3. Manually copying a trace to the host and viewing it in Eclipse (i.e. using Eclipse without network support)

If you already have an LTTng trace on a remote target and
would like to view it in Eclipse on the host, you can easily
copy it from the target to the host and import it into
Eclipse to view it using the LTTng Eclipse plug-in already
bundled in Eclipse (Juno SR1 or greater).

Using the trace we created in the previous section, archive
it and copy it to your host system:
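
[ the trace directory name is whatever 'lttng create' reported -
this one is illustrative ]
root@crownbay:~# cd ~/lttng-traces
root@crownbay:~/lttng-traces# tar zcvf auto-20121015-232120.tar.gz auto-20121015-232120

Then, on the host:

$ scp root@192.168.1.xxx:lttng-traces/auto-20121015-232120.tar.gz .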

Note

This section on collecting traces remotely doesn't currently
work because of Eclipse 'RSE' connectivity problems. Manually
tracing on the target, copying the trace files to the host,
and viewing the trace in Eclipse on the host as outlined in
previous steps does work however - please use the manual
steps outlined above to view traces in Eclipse.

In order to trace a remote target, you also need to add
a 'tracing' group on the target and connect as a user
who's part of that group, e.g.:

# adduser tomz
# groupadd -r tracing
# usermod -a -G tracing tomz

First, start Eclipse and open the
'LTTng Kernel' perspective by selecting the following
menu item:

Window | Open Perspective | Other...

In the dialog box that opens, select
'LTTng Kernel' from the list.

Back at the main menu, select the
following menu item:

File | New | Project...

In the dialog box that opens, select
the 'Tracing | Tracing Project' wizard and
press 'Next>'.

Give the project a name and press
'Finish'. That should result in an entry in the
'Project' subwindow.

In the 'Control' subwindow just below
it, press 'New Connection'.

Add a new connection, giving it the
hostname or IP address of the target system.

Provide the username and password
of a qualified user (a member of the 'tracing' group)
or root account on the target system.

Provide appropriate answers to whatever
else is asked for, e.g. the 'secure storage password'
can be anything you want.
If you get an 'RSE Error', it may be due to proxies.
It may be possible to get around the problem by
changing the proxy settings under the following preference,
e.g. by switching the 'Active Provider' from 'Native' to
'Direct':

Window | Preferences | General | Network Connections

You can find the primary LTTng Documentation on the
LTTng Documentation
site.
The documentation on this site is appropriate for intermediate to
advanced software developers who are working in a Linux environment
and are interested in efficient software tracing.

For information on LTTng in general, visit the
LTTng Project
site.
You can find a "Getting Started" link on this site that takes
you to an LTTng Quick Start.

Finally, you can access extensive help information on how to use
the LTTng plug-in to search and analyze captured traces via the
Eclipse help system:

blktrace is a tool for tracing and reporting low-level disk I/O.
blktrace provides the tracing half of the equation; its output can
be piped into the blkparse program, which renders the data in a
human-readable form and does some basic analysis (an example of
this 'pipe mode' appears later in this section).

For this section, we'll assume you've already performed the
basic setup outlined in the
"General Setup"
section.

blktrace is an application that runs on the target system.
You can run the entire blktrace and blkparse pipeline on the
target, or you can run blktrace in 'listen' mode on the target
and have blktrace and blkparse collect and analyze the data on
the host (see the
"Using blktrace Remotely"
section below).
For the rest of this section we assume you've ssh'ed into the
target from the host and will be running blktrace there.

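To start the trace, hand blktrace the device to trace (here
/dev/sdc, the device used in the examples below; substitute
your own):

root@crownbay:~# blktrace /dev/sdc
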
Press Ctrl-C in the blktrace shell to stop the trace. It will
display how many events were logged, along with the per-cpu file
sizes (blktrace records traces in per-cpu kernel buffers and
simply dumps them to userspace for blkparse to merge and sort
later).
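
To generate the report, hand blkparse the device basename used
during the trace - it merges and sorts the per-cpu files:

root@crownbay:~# blkparse sdc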

The report shows each event that was found in the blktrace data,
along with a summary of the overall block I/O traffic during
the run. You can look at the
blkparse
manpage to learn the
meaning of each field displayed in the trace listing.

blktrace and blkparse are designed from the ground up to
be able to operate together in a 'pipe mode' where the
stdout of blktrace can be fed directly into the stdin of
blkparse:

root@crownbay:~# blktrace /dev/sdc -o - | blkparse -i -

This enables long-lived tracing sessions to run without
writing anything to disk, and allows the user to look for
certain conditions in the trace data in 'real-time' by
viewing the trace output as it scrolls by on the screen or
by passing it along to yet another program in the pipeline
such as grep which can be used to identify and capture
conditions of interest.

There's actually another blktrace command that implements
the above pipeline as a single command, so the user doesn't
have to bother typing in the above command sequence:
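
That command is 'btrace', a small wrapper script shipped with
blktrace:

root@crownbay:~# btrace /dev/sdc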

Because blktrace traces block I/O and at the same time
normally writes its trace data to a block device, and
in general because it's not really a great idea to make
the device being traced the same as the device the tracer
writes to, blktrace provides a way to trace without
perturbing the traced device at all by providing native
support for sending all trace data over the network.

To have blktrace operate in this mode, start blktrace on
the target system being traced with the -l option, along with
the device to trace:
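
[ matching the /dev/sdc device used throughout this section ]
root@crownbay:~# blktrace -l /dev/sdc
server: waiting for connections...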

In one of our previous releases (denzil), users noticed that booting
off of a live image and writing to disk was noticeably slower.
This included the boot itself, especially the first one, since first
boots tend to do a significant amount of writing due to certain
post-install scripts.

The problem (and solution) was discovered by using the Yocto tracing
tools, in this case 'perf stat', 'perf script', 'perf record'
and 'perf report'.

See all the unvarnished details of how this bug was diagnosed and
solved here: Yocto Bug #3049